In this document, we analyze the objective and subjective features acquired from three hand-controllers for device and surgeon performance assessment. The studied hand-controllers were the microsurgery-specific Excalibur3DS-I (“Excalibur”) and two general-purpose haptic devices: the High Definition Phantom Premium 3.0 (“Premium”) and the sigma.7 (“Sigma7”). A group of 30 participants, recruited from among the experienced surgeons, residents, and fellows in the Department of Clinical Neurosciences at the University of Calgary, performed a peg-in-hole task consisting of cotton-strip micromanipulation in a limited workspace, designed to reflect the maneuvers performed during microsurgery. The multimedia supplementary files for the task and device structure are available at https://github.com/AmirBGitHub/smart-handcontroller/.
The reader can reveal any code chunk by clicking on its code button. We chose to hide the code by default because we: (a) wanted to improve the readability of this document; and (b) assumed that most readers will not be interested in reading every code chunk.
The snippet below documents the R libraries used in this research. For convenience, we used the pacman package, since it installs and loads all of the needed libraries in one step.
rm(list = ls()) # clear global environment
graphics.off() # close all graphics
library(pacman) # needs to be installed first
p_load(
R.matlab,
plotly,
extrafont,
grDevices,
gridExtra,
stats,
qlcMatrix,
dplyr,
stringr,
tidyverse,
utils,
reshape2,
ggplot2,
devtools,
knitr,
kableExtra,
data.table,
RColorBrewer,
readxl,
abind,
fmsb,
chron,
jmv,
factoextra,
imputeTS,
ISLR,
yarrr,
caret,
e1071,
psych,
nnet,
lmtest,
pscl
)
In the snippet below, we extract the “.mat” files produced by the hand-controllers’ MATLAB software.
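As a minimal sketch of this step, the R.matlab package (loaded above via pacman) reads a “.mat” file into a named list with one element per MATLAB variable. For illustration only, we round-trip a small matrix through a temporary file; the actual study files and their variable names differ.

```r
library(R.matlab) # read/write MATLAB ".mat" files

# Illustration only: write a small matrix to a temporary .mat file,
# then read it back. The study's real files were produced by the
# hand-controllers' MATLAB software.
tmp <- tempfile(fileext = ".mat")
writeMat(tmp, positions = matrix(1:6, nrow = 2))

mat_data <- readMat(tmp) # named list of MATLAB variables
dim(mat_data$positions)  # 2 x 3
```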
In this section, the survey data were extracted and put into an R data frame. The data include the status of being trained (Training Status); years of experience as a surgeon (Year Level Category), expressed in four levels of 1-3, 4-6, 7-9, and >10 years; and age range (Age Range Category), expressed in four levels of 23-30, 31-45, 46-60, and >61 years old. Binary data on prior experience with a hand-controller (Prior Experience) and experience playing video games (Video Game Experience) were also included in the survey questions.
The questionnaire consisted of 21 questions labeled as A1-A21 with A1-A4 having multiple choice answers and A5-A21 having a selection scale of 0-100 with the increments of 10. The actual questions are as follows:
A1) How is the hand-controller’s workspace for surgery?
A2) How was the physical size of the hand-controller?
A3) How was the hand-controller’s weight?
A4) How many times were you unable to orient the tool in your desired direction?
A5) How was the smoothness of motion for the hand-controller?
A6) How was the smoothness of positioning in 3D space?
A7) How was the smoothness of orienting in 3D space?
A8) How comfortable was the orientation of the hand-controller?
A9) How confidently could you move the hand-controller?
A10) How realistic was the force feedback?
A11) How many unexpected forces did you feel from the hand-controller not matching its position on the screen?
A12) How helpful was the force feedback in accomplishing the task?
A13) Rate the difficulty of positioning the tool tip using the hand-controller.
A14) How was the agreement between the tool motion on the screen and your intended movement?
A15) Rate the difficulty in orienting the tool to pick up the pegs.
A16) Rate the difficulty of roll movement.
A17) Rate the difficulty of pitch movement.
A18) Rate the difficulty of yaw movement.
A19) Rate your fatigue level in the upper limb after operating the hand-controller.
A20) Rate your fatigue level in the wrist after operating the hand-controller.
A21) Rate the appearance of the hand-controller.
To allow a consistent visualization, the questionnaire results were normalized between 0-1, with numbers closer to one indicating better performance. Below are the visualized summary results for the data averaged across the participants, together with a summary of the demographic and experience information. The survey summary results showed that 25 participants selected Excalibur as the superior hand-controller, while 2 participants selected Premium and 3 selected Sigma7. The visual results of the participants’ demographic and experience data (scaled between 0-1), presented in the following spider chart, show that the participants selecting Excalibur tend to have higher experience and age levels.
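As a minimal, self-contained sketch of the normalization used below (the question labels and answers here are made up for illustration): each answer is scored by its distance from the desired answer on the 0-100 question scale, so that 1 indicates a perfect response.

```r
# Desired answers: 100 where a higher rating is better,
# 0 where lower is better (e.g. the fatigue questions).
desired <- c(A5 = 100, A19 = 0)
answers <- c(A5 = 80, A19 = 30) # hypothetical responses

# score = 1 - |answer - desired| / 100
scores <- 1 - abs(answers - desired) / 100
scores # A5 = 0.8, A19 = 0.7
```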
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/DataReadSurvey.RData",
sep = ""
)
)
# converting completion times from date format to minutes (double)
tm_E <- c()
tm_P <- c()
tm_S <- c()
for (j in 1:30) {
tm_E <- abind(tm_E,
60 * 24 * as.numeric(times(strsplit(
toString(data.frame(subj_data_survey_main[5 * j, 23])[[1]]), " "
)[[1]][2])))
tm_P <- abind(tm_P,
60 * 24 * as.numeric(times(strsplit(
toString(data.frame(subj_data_survey_main[5 * j, 18])[[1]]), " "
)[[1]][2])))
tm_S <- abind(tm_S,
60 * 24 * as.numeric(times(strsplit(
toString(data.frame(subj_data_survey_main[5 * j, 13])[[1]]), " "
)[[1]][2])))
}
tm_brk_E <-
setNames(
data.frame(matrix(
data = NA, nrow = 30, ncol = 4
)),
c(
"completion.time.brk.1",
"completion.time.brk.2",
"completion.time.brk.3",
"completion.time.brk.4"
)
)
tm_brk_P <-
setNames(
data.frame(matrix(
data = NA, nrow = 30, ncol = 4
)),
c(
"completion.time.brk.1",
"completion.time.brk.2",
"completion.time.brk.3",
"completion.time.brk.4"
)
)
tm_brk_S <-
setNames(
data.frame(matrix(
data = NA, nrow = 30, ncol = 4
)),
c(
"completion.time.brk.1",
"completion.time.brk.2",
"completion.time.brk.3",
"completion.time.brk.4"
)
)
for (j in 1:30) {
for (k in 1:4) {
tm_brk_E[j, k] <-
60 * 24 * as.numeric(times(strsplit(toString(
data.frame(subj_data_survey_main[(5 * (j - 1) + k), 22])[[1]]
), " ")[[1]][2]))
tm_brk_P[j, k] <-
60 * 24 * as.numeric(times(strsplit(toString(
data.frame(subj_data_survey_main[(5 * (j - 1) + k), 17])[[1]]
), " ")[[1]][2]))
tm_brk_S[j, k] <-
60 * 24 * as.numeric(times(strsplit(toString(
data.frame(subj_data_survey_main[(5 * (j - 1) + k), 12])[[1]]
), " ")[[1]][2]))
}
}
# putting the survey results in one file and standardizing based on the desired answers
desire_answers <-
c(3,
3,
3,
1,
100,
100,
100,
100,
100,
100,
0,
100,
0,
100,
0,
0,
0,
0,
0,
0,
100)
subj_survey <- list(
cbind(
setNames(data.frame(c(1:30)),
"subject.num"),
setNames(data.frame(as.numeric(
factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 3])[[1]]))),
"train.status"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 4]),
"years.cat"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 5]),
"age.range.cat"),
setNames(data.frame(as.numeric(
factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 6])[[1]])) - 1),
"prior.experience.status"),
setNames(data.frame(as.numeric(
factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 7])[[1]])) - 1),
"videogame.experience.status"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 9]),
"order")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.E"),
tm_brk_E,
setNames(data.frame(tm_E),
"completion.time.avg"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 24]),
"error.count"),
4 - abs(subj_data_survery_trans[3 * c(1:30), 4:7]
- t(replicate(30, desire_answers[1:4]))),
100 - abs(subj_data_survery_trans[3 * c(1:30), 8:24]
- t(replicate(30, desire_answers[5:21])))
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.P"),
tm_brk_P,
setNames(data.frame(tm_P),
"completion.time.avg"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 19]),
"error.count"),
4 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 4:7]
- t(replicate(30, desire_answers[1:4]))),
100 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 8:24]
- t(replicate(30, desire_answers[5:21])))
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.S"),
tm_brk_S,
setNames(data.frame(tm_S),
"completion.time.avg"),
setNames(data.frame(
subj_data_survey_main[5 * c(1:30) - 4, 14]),
"error.count"),
4 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 4:7]
- t(replicate(30, desire_answers[1:4]))),
100 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 8:24]
- t(replicate(30, desire_answers[5:21])))
)
)
# scaling the data between 0 and 1
subj_survey_norm <- list(
cbind(
setNames(data.frame(c(1:30)),
"subject.num"),
setNames(data.frame(as.numeric(
factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 3])[[1]]))),
"train.status"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 4]),
"years.cat"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 5]),
"age.range.cat"),
setNames(data.frame(as.numeric(
factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 6])[[1]])) - 1),
"prior.experience.status"),
setNames(data.frame(as.numeric(
factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 7])[[1]])) - 1),
"videogame.experience.status"),
setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 9]),
"order")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.E"),
tm_brk_E/200, # /200 for normalization
setNames(data.frame(tm_E)/200,
"completion.time.avg"),
setNames(data.frame(
subj_data_survey_main[5 * c(1:30) - 4, 24])/5, # /5 for normalization
"error.count"),
1 - abs(subj_data_survery_trans[3 * c(1:30), 4:7]
- t(replicate(30, desire_answers[1:4]))) /
4,
1 - abs(subj_data_survery_trans[3 * c(1:30), 8:24]
- t(replicate(30, desire_answers[5:21]))) /
100
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.P"),
tm_brk_P/200,
setNames(data.frame(tm_P)/200,
"completion.time.avg"),
setNames(data.frame(
subj_data_survey_main[5 * c(1:30) - 4, 19])/5,
"error.count"),
1 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 4:7]
- t(replicate(30, desire_answers[1:4]))) /
4,
1 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 8:24]
- t(replicate(30, desire_answers[5:21]))) /
100
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.S"),
tm_brk_S/200,
setNames(data.frame(tm_S)/200,
"completion.time.avg"),
setNames(data.frame(
subj_data_survey_main[5 * c(1:30) - 4, 14])/5,
"error.count"),
1 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 4:7]
- t(replicate(30, desire_answers[1:4]))) /
4,
1 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 8:24]
- t(replicate(30, desire_answers[5:21]))) /
100
)
)
# summarizing the survey results
survery_results_E <- t(setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_survey[[2]])[, 6:28], na.rm = TRUE),
digits = 4
)), "Excalibur.average"))
survery_results_P <- t(setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_survey[[3]])[, 6:28], na.rm = TRUE),
digits = 4
)), "Premium.average"))
survery_results_S <- t(setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_survey[[4]])[, 6:28], na.rm = TRUE),
digits = 4
)), "Sigma7.average"))
# mean value of each question across all subjects
sm <-
cbind(
setNames(data.frame(c(1:23)), "order"),
setNames(data.frame(
c(
"Completion Time Avg. (x200 sec.)",
"Error Count (x5)",
"A1: Ideal Workspace Size",
"A2: Ideal Hand-controller Size",
"A3: Ideal Grip Weight",
"A4: Tool Orienting Easiness",
"A5: Motion Smoothness",
"A6: Positioning Smoothness",
"A7: Orienting Smoothness",
"A8: Orienting Comfort",
"A9: Motion Confidence",
"A10: Force Feedback Realness",
"A11: Absence of Unexpected Forces",
"A12: Force Feedback Usefulness",
"A13: Tooltip Positioning Easiness",
"A14: Motion Expected",
"A15: Pickup Tool Orienting Easiness",
"A16: Tool Roll Easiness",
"A17: Tool Pitch Easiness",
"A18: Tool Yaw Easiness",
"A19: Upper Limb Comfort",
"A20: Wrist Comfort",
"A21: Hand-controller Appearance"
)
),
"question.name"),
setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_survey_norm[[2]]
)[, 6:28],
na.rm = TRUE), digits = 4
)), "Excalibur Average"),
setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_survey_norm[[3]]
)[, 6:28],
na.rm = TRUE), digits = 4
)), "Premium Average"),
setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_survey_norm[[4]]
)[, 6:28],
na.rm = TRUE), digits = 4
)), "Sigma7 Average")
)
sm$question.name <- factor(sm$question.name , levels = sm$question.name )
df_transformed <- melt(
sm,
id.vars = "question.name",
measure.vars = c("Excalibur Average",
"Premium Average",
"Sigma7 Average")
)
# plot of performance of different hand-controllers based on survey
# the plot is colored by the hand-controller groupName 'variable'
# the results are scaled between 0 and 1
ggplot(df_transformed,
aes(
x = question.name,
y = value,
fill = variable,
label = round(value, digits = 2)
)) +
scale_fill_manual(values=c("mediumaquamarine", "salmon", "cornflowerblue")) +
geom_bar(stat = "identity") +
theme_minimal() +
geom_text(size = 6, position = position_stack(vjust = 0.5)) +
ggtitle(label = "Performance of Different Hand-controllers",
subtitle = "From the Survey and Manual Recordings") +
labs(
caption = paste(
"Note: Summary results of the questionnaire data",
"averaged across the participants along with\n the completion time",
"averaged per trial and error count results. The data are scaled",
"between 0-1 for a better\n visual representation. Significant variables",
"according to MANOVA tests in Section 2.1 are labeled with '*'."
)
) +
xlab("Performance Metric / Survey Question") +
ylab("Normalized Performance") +
labs(fill = "Hand-controller") +
geom_text(x=1, y=1.35, label="*", size = 10) +
geom_text(x=2, y=0.95, label="*", size = 10) +
geom_text(x=3, y=2.6, label="*", size = 10) +
geom_text(x=5, y=2.8, label="*", size = 10) +
geom_text(x=9, y=2.1, label="*", size = 10) +
geom_text(x=10, y=2.1, label="*", size = 10) +
geom_text(x=11, y=2.2, label="*", size = 10) +
geom_text(x=12, y=1.65, label="*", size = 10) +
geom_text(x=13, y=1.95, label="*", size = 10) +
geom_text(x=15, y=1.8, label="*", size = 10) +
geom_text(x=19, y=1.9, label="*", size = 10) +
geom_text(x=21, y=2.35, label="*", size = 10) +
geom_text(x=22, y=2.15, label="*", size = 10) +
theme(
plot.title = element_text(
size = 40,
face = "bold",
hjust = 0.5,
margin = margin(
t = 0,
r = 0,
b = 20,
l = 0
)
),
plot.subtitle = element_text(
size = 32,
face = "bold",
hjust = 0.5,
margin = margin(
t = 0,
r = 0,
b = 20,
l = 0
)
),
plot.caption = element_text(
size = 22,
hjust = 0,
margin = margin(
t = 30,
r = 0,
b = 0,
l = 0
)
),
axis.text.x = element_text(
size = 24,
angle = 45,
hjust = 1,
margin = margin(
t = 0,
r = 0,
b = 20,
l = 0
)
),
axis.text.y = element_blank(),
axis.title.x = element_text(size = 32),
axis.title.y = element_text(size = 32,
margin = margin(
t = 0,
r = 10,
b = 0,
l = 200
)),
legend.text = element_text(size = 22),
legend.title = element_text(size = 24)
)
############################################################################
# plot of subject demographics vs. performance of different hand-controllers
# mean value of performance question results for each subject
qm <- cbind(
setNames(data.frame(c(1:30)), "subject.num"),
setNames(data.frame(rowMeans(
subj_survey_norm[[2]][, 8:28],
na.rm = TRUE
)), "Excalibur Average"),
setNames(data.frame(rowMeans(
subj_survey_norm[[3]][, 8:28],
na.rm = TRUE
)), "Premium Average"),
setNames(data.frame(rowMeans(
subj_survey_norm[[4]][, 8:28],
na.rm = TRUE
)), "Sigma7 Average")
)
# summary of subject demographics picking each device
ss_E <- subj_survey[[1]][which(apply(qm[, 2:4], 1, which.max) == 1), ]
ss_P <- subj_survey[[1]][which(apply(qm[, 2:4], 1, which.max) == 2), ]
ss_S <- subj_survey[[1]][which(apply(qm[, 2:4], 1, which.max) == 3), ]
sd <- cbind(
setNames(data.frame(c(
dim(ss_E)[1],
zapsmall(colMeans(array(ss_E[, 2:6]),
na.rm = TRUE), digits = 2)
)), "Excalibur"),
setNames(data.frame(c(
dim(ss_P)[1],
zapsmall(colMeans(array(ss_P[, 2:6]),
na.rm = TRUE), digits = 2)
)), "Premium"),
setNames(data.frame(c(
dim(ss_S)[1],
zapsmall(colMeans(array(ss_S[, 2:6]),
na.rm = TRUE), digits = 2)
)), "Sigma7")
)
row.names(sd) <-
c(
"Subject Counts Selected Device",
"Training Status",
"Year Level Category",
"Age Range Category",
"Prior Experience",
"Video Game Experience"
)
# To use the fmsb package, we must add 2 lines to the data frame:
# the max and min of each variable to show on the plot
sp_data <- data.frame(t(sd)[, 2:6])
sp_data <- rbind(c(1, 4, 4, 1, 1) , c(0, 1, 1, 0, 0), sp_data)
# making the spider graph
colors_border = c(rgb(0.4, 0.86, 0.66, 0.9),
rgb(1, 0.54, 0.41, 0.9) ,
rgb(0.39, 0.58, 0.92, 0.9))
colors_in = c(rgb(0.4, 0.86, 0.66, 0.4),
rgb(1, 0.54, 0.41, 0.5) ,
rgb(0.39, 0.58, 0.92, 0.4))
radarchart(
sp_data,
axistype = 1 ,
# custom polygon
pcol = colors_border ,
pfcol = colors_in ,
plwd = 4 ,
plty = 1,
# custom the grid
cglcol = "grey",
cglty = 1,
axislabcol = "grey",
caxislabels = seq(0, 1, 0.25),
cglwd = 8,
calcex = 2,
# custom labels
vlabels = c(
"Training Status",
"Year Level Category",
"Age Range Category",
"Prior Experience",
"Video Game Experience"
),
vlcex = 2,
title = "Summary of the Demographics of the Subjects Picking Each Device",
cex.main = 3,
)
legend(
x = 0.7,
y = 1,
legend = c(
"Excalibur: 25 Subjects",
"Premium: 2 Subjects",
"Sigma7: 3 Subjects"
),
bty = "n",
pch = 20 ,
col = colors_in ,
text.col = "black",
cex = 2,
pt.cex = 8
)
save(subj_survey,survery_results_E,survery_results_P,survery_results_S,
file = paste("~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData", sep=""))
save(subj_survey_norm,survery_results_E,survery_results_P,survery_results_S,
file = paste("~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData", sep=""))
Following the summary results for the questionnaire, the direct data from the sensors and hand-controllers are extracted and presented in this section. A header snapshot of the raw data is presented, showing 6 rows starting from row 5000 for one of the participants. The recorded raw data included time in seconds; tool-tip position; force feedback; gimbal and arm angles derived through Denavit–Hartenberg parameters, all presented along the x, y, and z axes; pinch value; pedal usage; and the number of clutch uses. Each participant completed 4 trials with 4 peg-in-hole tasks per trial, for a total of 16 task repetitions.
The recorded data were converted to the hand-controller user features introduced in Hoshyarmanesh et al. 2019: the sum of arm angles in the hand-controller (Arm Angle Sum), the time to complete the task (Completion Time), the root mean square of the force feedback (Force RMS), the sum of gimbal angles in the hand-controller (Gimbal Angle Sum), the number of clutch uses during the task, penalized by the error count (Number of Clutches), and the path length at the hand-controller gripper to accomplish the task (Travel Distance). Please note that for Force RMS and Travel Distance the data from the slave-robot end effector (Kuka) were used rather than the hand-controller itself. The calculated features were summarized for each participant by averaging across the 16 task repetitions. To be consistent with the questionnaire data, and for the sake of simplicity in our comparisons, the extracted features were normalized between 0-1. Below are the visualized summary results for the data averaged across the participants.
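As a minimal, self-contained sketch of two of these features, computed on a made-up three-sample trajectory (the full computation over the recorded data appears in the chunk below): Travel Distance is the summed Euclidean length of consecutive position differences, and Force RMS is the root mean square over all force samples.

```r
# Hypothetical trajectory: 3 samples of (x, y, z) position in metres.
pos <- matrix(c(0, 0, 0,
                3, 4, 0,
                3, 4, 12), ncol = 3, byrow = TRUE)

# Travel distance: sum of norms of consecutive position differences.
pos_diff <- pos[-1, ] - pos[-nrow(pos), ]
travel_distance <- sum(sqrt(rowSums(pos_diff^2))) # 5 + 12 = 17

# Force RMS: hypothetical (x, y, z) force samples in newtons.
frc <- matrix(c(1, 2, 2,
                2, 2, 1), ncol = 3, byrow = TRUE)
rms_force <- sqrt(sum(frc^2) / nrow(frc)) # sqrt(18 / 2) = 3
```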
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/DataReadSensor.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/DataReadSensorKuka.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
sep = ""
)
)
# Printing the top six rows of Subject 1's data as an example after row 5000
head(subject1_features_E[5000:5500, ]) %>% round(digits = 3)
abbrv_device <- c("E", "P", "S")
completion_time <- array(NA, dim = c(30, 16, 3))
travel_distance <- array(NA, dim = c(30, 16, 3))
num_clutches <- array(NA, dim = c(30, 16, 3))
sum_gimbangle <- array(NA, dim = c(30, 16, 3))
sum_armangle <- array(NA, dim = c(30, 16, 3))
rms_force <- array(NA, dim = c(30, 16, 3))
for (i in 1:3) {
for (j in 1:30) {
err_4trial = subj_survey[[i+1]]$error.count[j]
for (k in 1:16) {
#print(paste0("device: ", i))
#print(paste0("subject: ", j))
#print(paste0("trial: ", k))
task_data <- get(paste0("subject", j,
"_features_",
abbrv_device[i]))[get(paste0("subject", j,
"_features_",
abbrv_device[i]))[, "experim_num"] == k,]
task_data_kuka <- get(paste0("Kuka_subject", j,
"_features_",
abbrv_device[i]))[get(paste0("Kuka_subject", j,
"_features_",
abbrv_device[i]))[, "experim_num"] == k,]
if (dim(task_data)[1] != 0) {
# finding the completion time (CT)
completion_time[j, k, i] = max(task_data[, "time_in_sec"], na.rm = FALSE) -
min(task_data[, "time_in_sec"], na.rm = FALSE)
# finding the travel distance (TD)
pos <-
task_data_kuka[, c("position_x", "position_y", "position_z")]
pos_diff <- pos[-1, ] - pos[-nrow(pos), ]
travel_distance[j, k, i] = sum(sum(sqrt(rowSums(pos_diff * pos_diff))),
- (err_4trial/16)*sum(sqrt(rowSums(pos_diff * pos_diff))),
na.rm = TRUE) # removing error periods
# finding the number of clutches (NC)
num_clutches[j, k, i] = sum(max(task_data[, "clutch_num"], na.rm = FALSE),
(err_4trial/16)*max(task_data[, "clutch_num"],
na.rm = FALSE),
na.rm = TRUE) # removing error periods
# finding the sum of the variation of gimbal angle (sigma(delta_theta_4-6))
gimb <-
task_data[, c("gimbangle_x", "gimbangle_y", "gimbangle_z")]
gimb_diff <- gimb[-1, ] - gimb[-nrow(gimb), ]
sum_theta <- colSums(abs(gimb_diff), na.rm = FALSE, dims = 1)
sum_gimbangle[j, k, i] = sum(sum(sum_theta),
-(err_4trial/16)*sum(sum_theta),
na.rm = TRUE) # removing error periods
# finding the sum of the variation of arm angle (sigma(delta_theta_1-3))
arm <-
task_data[, c("armangle_x", "armangle_y", "armangle_z")]
arm_diff <- arm[-1, ] - arm[-nrow(arm), ]
sum_theta <- colSums(abs(arm_diff), na.rm = FALSE, dims = 1)
sum_armangle[j, k, i] = sum(sum(sum_theta),
-(err_4trial/16)*sum(sum_theta),
na.rm = TRUE) # removing error periods
# finding the root mean squared of forces (RMSF)
frc <- task_data_kuka[, c("force_x", "force_y", "force_z")]
rms_force[j, k, i] = sqrt(sum(frc * frc, na.rm = FALSE)/dim(frc * frc)[1])
} else{
completion_time[j, k, i] = NA
travel_distance[j, k, i] = NA
num_clutches[j, k, i] = NA
sum_gimbangle[j, k, i] = NA
sum_armangle[j, k, i] = NA
rms_force[j, k, i] = NA
}
}
}
}
completion_time[completion_time == 0] <- NA
travel_distance[travel_distance == 0] <- NA
#num_clutches[num_clutches == 0] <- NA
sum_gimbangle[sum_gimbangle == 0] <- NA
sum_armangle[sum_armangle == 0] <- NA
rms_force[rms_force == 0] <- NA
completion_time_norm <- completion_time/1000
travel_distance_norm <- travel_distance/500
num_clutches_norm <- num_clutches/10
sum_gimbangle_norm <- sum_gimbangle/20
sum_armangle_norm <- sum_armangle/5
rms_force_norm <- rms_force/2
# putting the sensor results in one file
subj_feature <- list(
cbind(
setNames(data.frame(c(1:30)),
"subject.num.E"),
setNames(data.frame(rowMeans(completion_time[, , 1] ,na.rm = TRUE)),
"completion.time"),
setNames(data.frame(rowMeans(travel_distance[, , 1] ,na.rm = TRUE)),
"travel.distance"),
setNames(data.frame(rowMeans(num_clutches[, , 1] ,na.rm = TRUE)),
"number.clutches"),
setNames(data.frame(rowMeans(rms_force[, , 1] ,na.rm = TRUE)),
"rms.force"),
setNames(data.frame(rowMeans(sum_gimbangle[, , 1] ,na.rm = TRUE)),
"sum.gimbal.angle"),
setNames(data.frame(rowMeans(sum_armangle[, , 1] ,na.rm = TRUE)),
"sum.arm.angle")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.P"),
setNames(data.frame(rowMeans(completion_time[, , 2] ,na.rm = TRUE)),
"completion.time"),
setNames(data.frame(rowMeans(travel_distance[, , 2] ,na.rm = TRUE)),
"travel.distance"),
setNames(data.frame(rowMeans(num_clutches[, , 2] ,na.rm = TRUE)),
"number.clutches"),
setNames(data.frame(rowMeans(rms_force[, , 2] ,na.rm = TRUE)),
"rms.force"),
setNames(data.frame(rowMeans(sum_gimbangle[, , 2] ,na.rm = TRUE)),
"sum.gimbal.angle"),
setNames(data.frame(rowMeans(sum_armangle[, , 2] ,na.rm = TRUE)),
"sum.arm.angle")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.S"),
setNames(data.frame(rowMeans(completion_time[, , 3] ,na.rm = TRUE)),
"completion.time"),
setNames(data.frame(rowMeans(travel_distance[, , 3] ,na.rm = TRUE)),
"travel.distance"),
setNames(data.frame(rowMeans(num_clutches[, , 3] ,na.rm = TRUE)),
"number.clutches"),
setNames(data.frame(rowMeans(rms_force[, , 3] ,na.rm = TRUE)),
"rms.force"),
setNames(data.frame(rowMeans(sum_gimbangle[, , 3] ,na.rm = TRUE)),
"sum.gimbal.angle"),
setNames(data.frame(rowMeans(sum_armangle[, , 3] ,na.rm = TRUE)),
"sum.arm.angle")
)
)
# putting the sensor results in one file (scaled)
subj_feature_norm <- list(
cbind(
setNames(data.frame(c(1:30)),
"subject.num.E"),
setNames(data.frame(rowMeans(completion_time_norm[, , 1] ,na.rm = TRUE)),
"completion.time"),
setNames(data.frame(rowMeans(travel_distance_norm[, , 1] ,na.rm = TRUE)),
"travel.distance"),
setNames(data.frame(rowMeans(num_clutches_norm[, , 1] ,na.rm = TRUE)),
"number.clutches"),
setNames(data.frame(rowMeans(rms_force_norm[, , 1] ,na.rm = TRUE)),
"rms.force"),
setNames(data.frame(rowMeans(sum_gimbangle_norm[, , 1] ,na.rm = TRUE)),
"sum.gimbal.angle"),
setNames(data.frame(rowMeans(sum_armangle_norm[, , 1] ,na.rm = TRUE)),
"sum.arm.angle")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.P"),
setNames(data.frame(rowMeans(completion_time_norm[, , 2] ,na.rm = TRUE)),
"completion.time"),
setNames(data.frame(rowMeans(travel_distance_norm[, , 2] ,na.rm = TRUE)),
"travel.distance"),
setNames(data.frame(rowMeans(num_clutches_norm[, , 2] ,na.rm = TRUE)),
"number.clutches"),
setNames(data.frame(rowMeans(rms_force_norm[, , 2] ,na.rm = TRUE)),
"rms.force"),
setNames(data.frame(rowMeans(sum_gimbangle_norm[, , 2] ,na.rm = TRUE)),
"sum.gimbal.angle"),
setNames(data.frame(rowMeans(sum_armangle_norm[, , 2] ,na.rm = TRUE)),
"sum.arm.angle")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.S"),
setNames(data.frame(rowMeans(completion_time_norm[, , 3] ,na.rm = TRUE)),
"completion.time"),
setNames(data.frame(rowMeans(travel_distance_norm[, , 3] ,na.rm = TRUE)),
"travel.distance"),
setNames(data.frame(rowMeans(num_clutches_norm[, , 3] ,na.rm = TRUE)),
"number.clutches"),
setNames(data.frame(rowMeans(rms_force_norm[, , 3] ,na.rm = TRUE)),
"rms.force"),
setNames(data.frame(rowMeans(sum_gimbangle_norm[, , 3] ,na.rm = TRUE)),
"sum.gimbal.angle"),
setNames(data.frame(rowMeans(sum_armangle_norm[, , 3] ,na.rm = TRUE)),
"sum.arm.angle")
)
)
save(
subj_feature,
completion_time,
travel_distance,
num_clutches,
sum_gimbangle,
sum_armangle,
rms_force,
file = paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
sep = ""
)
)
save(
subj_feature_norm,
completion_time_norm,
travel_distance_norm,
num_clutches_norm,
sum_gimbangle_norm,
sum_armangle_norm,
rms_force_norm,
file = paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
sep = ""
)
)
# mean value of each feature across all subjects
sm <-
cbind(
setNames(data.frame(
c(
"Travel Distance (x500 m) ",
"Number of Clutches (x10)",
"Force RMS (x2 N)",
"Gimbal Angle Sum (x20 Rad)",
"Arm Angle Sum (x5 Rad)"
)
),
"feature.name"),
setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_feature_norm[[1]])[, 3:7], na.rm = TRUE), digits = 4
)), "Excalibur Average"),
setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_feature_norm[[2]])[, 3:7], na.rm = TRUE), digits = 4
)), "Premium Average"),
setNames(data.frame(zapsmall(
colMeans(data.matrix(subj_feature_norm[[3]])[, 3:7], na.rm = TRUE), digits = 4
)), "Sigma7 Average")
)
df_transformed <- melt(
sm,
id.vars = "feature.name",
measure.vars = c("Excalibur Average",
"Premium Average",
"Sigma7 Average")
)
# plot of performance of different hand-controllers based on sensor data
# the plot is colored by the hand-controller groupName 'variable'
# the results are scaled between 0 and 1
ggplot(df_transformed,
aes(
x = feature.name,
y = value,
fill = variable,
label = value
)) +
geom_text(x=2, y=1.87, label="*", size = 12) +
geom_text(x=4, y=1.26, label="*", size = 12) +
geom_text(x=5, y=1.23, label="*", size = 12) +
scale_fill_manual(values=c("mediumaquamarine", "salmon", "cornflowerblue")) +
geom_bar(stat = "identity") +
theme_minimal() +
geom_text(size = 6, position = position_stack(vjust = 0.5)) +
ggtitle(label = "Performance Features for Different Hand-controllers",
subtitle = "From the Sensor Recordings") +
labs(
caption = paste(
"Note: Summary results of the performance features",
"from the recorded data by the hand-controller averaged across",
"the\n participants. Data are scaled between 0 and 1. Please note that",
"the arm angle data was not recorded by Sigma7 because\n of its parallel structure.",
"Significant variables according to MANOVA tests in Section 2.2 are labeled with '*'."
)
) +
xlab("Feature from Sensor Data") +
ylab("Normalized Feature") +
labs(fill = "Hand-controller") +
theme(
plot.title = element_text(
size = 36,
face = "bold",
hjust = 0.5,
margin = margin(
t = 0,
r = 0,
b = 30,
l = 0
)
),
plot.subtitle = element_text(
size = 28,
face = "bold",
hjust = 0.5,
margin = margin(
t = 0,
r = 0,
b = 50,
l = 0
)
),
plot.caption = element_text(
size = 22,
hjust = 0,
margin = margin(
t = 30,
r = 0,
b = 0,
l = 0
)
),
axis.text.x = element_text(
size = 24,
angle = 45,
hjust = 1,
margin = margin(
t = 0,
r = 0,
b = 20,
l = 0
)
),
axis.text.y = element_blank(),
axis.title.x = element_text(size = 32),
axis.title.y = element_text(size = 32,
margin = margin(
t = 0,
r = 10,
b = 0,
l = 150
)),
legend.text = element_text(size = 22),
legend.title = element_text(size = 24)
)
In this section, the performance metrics defined in Hoshyarmanesh et al. 2019 were calculated from the features derived in the previous section. The performance metrics included the hand-controller maneuverability, expressed as the level of angular movement over time (Performance (angular maneuver)); the time to finish the task (Performance (completion time)); the force exerted over the path length of the trajectory (Performance (force over distance)); the number of clutch uses during the task (Performance (number of clutches)); and the path length of the overall trajectory (Performance (travel distance)).
For similar reasons to those mentioned above, the calculated performance metrics were scaled between 0-1. Below are the visualized summary results for the performance-metric data averaged across the participants. In this figure, values closer to 1 indicate better performance.
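As a minimal, self-contained sketch of two of these metrics (with made-up feature values; the reference constants match those defined in the chunk below): completion-time performance is a reference time divided by the measured time, and clutch performance falls off from 1 along a quarter circle as the clutch count moves away from the reference count.

```r
ct_ref <- 15 # reference completion time (sec)
nc_ref <- 1  # reference number of clutches
b <- 25      # falloff width, NC_max - NC_ref

ct <- 30 # hypothetical measured completion time (sec)
nc <- 16 # hypothetical number of clutches

perf_ct <- ct_ref / ct                     # 0.5
perf_nc <- sqrt(1 - ((nc - nc_ref) / b)^2) # sqrt(1 - 0.6^2) = 0.8
```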
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
sep = ""
)
)
ct_ref <- 15 # reference completion time 15 sec
td_ref <- 0.2 # reference travel distance 0.2 m
nc_ref <- 1 # reference number of clutches
b <- 25 # b = NC_max - NC_red = 26 - 1 = 25 (max value was actually 62 -> outlier)
# finding the performance (completion time)
perf_u_ct = ct_ref / completion_time
# finding the performance (travel distance)
perf_u_td = td_ref / travel_distance
# finding the performance (number of clutches)
perf_u_nc = sqrt(1 - ((num_clutches - nc_ref) / b) ^ 2)
# finding the performance (angular maneuver)
perf_u_dt = sum_gimbangle / (30*completion_time*travel_distance)
# finding the performance (force over distance)
perf_u_sf = rms_force / (completion_time*travel_distance)
perf_u_ct_all <- array(NA, dim = c(1, 3))
perf_u_td_all <- array(NA, dim = c(1, 3))
perf_u_nc_all <- array(NA, dim = c(1, 3))
perf_u_dt_all <- array(NA, dim = c(1, 3))
perf_u_sf_all <- array(NA, dim = c(1, 3))
for (i in 1:3) {
  perf_u_ct_all[, i] <- ct_ref / mean(subj_survey[[i+1]]$completion.time.avg, na.rm = TRUE) * 2
  perf_u_td_all[, i] <- td_ref / mean(rowMeans(travel_distance[, , i], na.rm = TRUE), na.rm = TRUE) * 500
  perf_u_nc_all[, i] <- sqrt(1 - ((mean(rowMeans(num_clutches[, , i], na.rm = TRUE), na.rm = TRUE) -
    nc_ref) / b) ^ 2) / 2
  perf_u_dt_all[, i] <- mean(rowMeans(sum_gimbangle[, , i], na.rm = TRUE), na.rm = TRUE) /
    (30 * mean(subj_survey[[i+1]]$completion.time.avg, na.rm = TRUE) *
       mean(rowMeans(travel_distance[, , i], na.rm = TRUE), na.rm = TRUE)) * 2*10^4
  perf_u_sf_all[, i] <- mean(rowMeans(rms_force[, , i], na.rm = TRUE), na.rm = TRUE) /
    (mean(subj_survey[[i+1]]$completion.time.avg, na.rm = TRUE) *
       mean(rowMeans(travel_distance[, , i], na.rm = TRUE), na.rm = TRUE)) * 10^4
}
# putting the sensor results in one file
subj_perf <- list(
cbind(
setNames(data.frame(c(1:30)),
"subject.num.E"),
setNames(data.frame(rowMeans(perf_u_ct[, , 1] ,na.rm = TRUE)),
"perf.ct"),
setNames(data.frame(rowMeans(perf_u_td[, , 1] ,na.rm = TRUE)),
"perf.td"),
setNames(data.frame(rowMeans(perf_u_nc[, , 1] ,na.rm = TRUE)),
"perf.nc"),
setNames(data.frame(rowMeans(perf_u_dt[, , 1] ,na.rm = TRUE)),
"perf.dt"),
setNames(data.frame(rowMeans(perf_u_sf[, , 1] ,na.rm = TRUE)),
"perf.sf")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.E"),
setNames(data.frame(rowMeans(perf_u_ct[, , 2] ,na.rm = TRUE)),
"perf.ct"),
setNames(data.frame(rowMeans(perf_u_td[, , 2] ,na.rm = TRUE)),
"perf.td"),
setNames(data.frame(rowMeans(perf_u_nc[, , 2] ,na.rm = TRUE)),
"perf.nc"),
setNames(data.frame(rowMeans(perf_u_dt[, , 2] ,na.rm = TRUE)),
"perf.dt"),
setNames(data.frame(rowMeans(perf_u_sf[, , 2] ,na.rm = TRUE)),
"perf.sf")
),
cbind(
setNames(data.frame(c(1:30)),
"subject.num.E"),
setNames(data.frame(rowMeans(perf_u_ct[, , 3] ,na.rm = TRUE)),
"perf.ct"),
setNames(data.frame(rowMeans(perf_u_td[, , 3] ,na.rm = TRUE)),
"perf.td"),
setNames(data.frame(rowMeans(perf_u_nc[, , 3] ,na.rm = TRUE)),
"perf.nc"),
setNames(data.frame(rowMeans(perf_u_dt[, , 3] ,na.rm = TRUE)),
"perf.dt"),
setNames(data.frame(rowMeans(perf_u_sf[, , 3] ,na.rm = TRUE)),
"perf.sf")
)
)
# mean value of each feature across all subjects
sm <-
cbind(
setNames(data.frame(
c(
"Performance (completion time) (x0.5)",
"Performance (travel distance) (x0.002)",
"Performance (number of clutches) (x2)",
"Performance (angular maneuver) (x0.00005)",
"Performance (force over distance) (x0.0001)"
)
),
"perf.name"),
setNames(data.frame(zapsmall(
c(perf_u_ct_all[, 1],
perf_u_td_all[, 1],
perf_u_nc_all[, 1],
perf_u_dt_all[, 1],
perf_u_sf_all[, 1]), digits = 4
)), "Excalibur Average"),
setNames(data.frame(zapsmall(
c(perf_u_ct_all[, 2],
perf_u_td_all[, 2],
perf_u_nc_all[, 2],
perf_u_dt_all[, 2],
perf_u_sf_all[, 2]), digits = 4
)), "Premium Average"),
setNames(data.frame(zapsmall(
c(perf_u_ct_all[, 3],
perf_u_td_all[, 3],
perf_u_nc_all[, 3],
perf_u_dt_all[, 3],
perf_u_sf_all[, 3]), digits = 4
)), "Sigma7 Average")
)
df_transformed <- melt(
sm,
id.vars = "perf.name",
measure.vars = c("Excalibur Average",
"Premium Average",
"Sigma7 Average")
)
# plot of performance of different hand-controllers based on sensor data
# the plot is colored by the hand-controller groupName 'variable'
# the results are normalized between 0 and 1 (1 showing the best performance)
ggplot(df_transformed,
aes(
x = perf.name,
y = value,
fill = variable,
label = value
)) +
scale_fill_manual(values=c("mediumaquamarine", "salmon", "cornflowerblue")) +
geom_bar(stat = "identity") +
theme_minimal() +
geom_text(size = 6, position = position_stack(vjust = 0.5)) +
ggtitle(label = "Calculated Performance Metrics for Different Hand-controllers",
subtitle = "From the Sensor Recordings") +
labs(
caption = paste(
"Note: Summary results of the performance metrics",
"calculated from the performance features\n averaged across the",
"participants. Data are scaled between 0 and 1."
)
) +
xlab("Performance Metric from Sensor Data") +
ylab("Normalized Performance") +
labs(fill = "Hand-controller") +
theme(
plot.title = element_text(
size = 36,
face = "bold",
hjust = 0.5,
margin = margin(
t = 0,
r = 0,
b = 30,
l = 0
)
),
plot.subtitle = element_text(
size = 28,
face = "bold",
hjust = 0.5,
margin = margin(
t = 0,
r = 0,
b = 50,
l = 0
)
),
plot.caption = element_text(
size = 22,
hjust = 0,
margin = margin(
t = 30,
r = 0,
b = 0,
l = 0
)
),
axis.text.x = element_text(
size = 24,
angle = 45,
hjust = 1,
margin = margin(
t = 0,
r = 0,
b = 20,
l = 0
)
),
axis.text.y = element_blank(),
axis.title.x = element_text(size = 32),
axis.title.y = element_text(size = 32,
margin = margin(
t = 0,
r = 10,
b = 30,
l = 180
)),
legend.text = element_text(size = 22),
legend.title = element_text(size = 24)
)
save(
subj_perf,
perf_u_ct,
perf_u_td,
perf_u_nc,
perf_u_dt,
perf_u_sf,
perf_u_ct_all,
perf_u_td_all,
perf_u_nc_all,
perf_u_dt_all,
perf_u_sf_all,
file = paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/PerformanceUsersAll.RData",
sep = ""
)
)
We also provide breakdowns of Completion Time over the 4 trials of the peg-in-hole task, as well as over the subtasks with the 4 different peg shapes. The results for Completion Time, Travel Distance, and the Number of Clutches are shown across the 3 hand-controllers. Completion Time decreases through the first 3 trials for Premium and Sigma7, then increases in the last trial. For Excalibur, however, it decreases across all 4 trials (with a minor increase in the 2nd trial), showing an improvement over the course of the experiment. On the other hand, while the average feature values differ between hand-controllers, the pattern of change across the peg-manipulation subtasks is consistent among devices. Surprisingly, this pattern is ascending for Completion Time, meaning that peg shape has a larger effect on time than growing familiarity with the task. In addition, the results indicate higher values of both Travel Distance and the Number of Clutches for Peg 2. Peg 1 was found to be the easiest according to all three features.
We anticipated a learning curve both over the four trials and between the three hand-controllers. Although we observed a tendency toward improvement in trial time over the course of the experiments per user, the mean fourth-trial time was slower than the mean third-trial time for Premium and Sigma7. During the experiments it was noted that users became more comfortable with the task and therefore faster. By the fourth trial, however, users may have experienced muscle and mental fatigue, leading to less precise movements and a slower completion time (Slack et al. 2008). Surprisingly, this pattern showed an improvement among the users of Excalibur, which can be attributed to its specific tool design and higher maneuverability, imposing minimal fatigue in such a precise task. A second explanation for the slower fourth time on the other hand-controllers may relate to the board design. The subtask completion-time breakdowns showed an ascending trend from the first peg to the last. The first peg exhibited the fastest completion time despite being the first subtask, likely because it required only 30 degrees of tool roll, while the subsequent pegs required up to 120 degrees. The nature of each specific task may therefore play an important role in user performance. Moving the pegs from the angled surface to the flat surface was easier than moving them in the opposite direction. It is conceivably more difficult to simultaneously orient a peg in both the pitch and roll orientations than to maneuver in only one dimension.
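The trial-level comparison above amounts to averaging completion time per trial and device. A minimal sketch with made-up toy values (the actual analysis uses the CT_trial data frame built below):

```r
# Toy long-format data mimicking the CT_trial layout; all values are invented
# purely to illustrate the per-trial averaging.
toy <- data.frame(
  Device = rep(c("Excalibur", "Premium"), each = 8),
  Trial  = rep(rep(1:4, each = 2), times = 2),
  completion.time = c(70, 68, 66, 64, 60, 58, 55, 54,  # Excalibur: steady improvement
                      80, 78, 70, 69, 62, 60, 66, 65)  # Premium: slower 4th trial
)
aggregate(completion.time ~ Trial + Device, data = toy, FUN = mean)
```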
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
sep = ""
)
)
CT_task <- rbind(cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Excalibur", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(completion_time[, c(1, 5, 9, 13), 1], na.rm = TRUE),
rowMeans(completion_time[, c(2, 6, 10, 14), 1], na.rm = TRUE),
rowMeans(completion_time[, c(3, 7, 11, 15), 1], na.rm = TRUE),
rowMeans(completion_time[, c(4, 8, 12, 16), 1], na.rm = TRUE))), "completion.time")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Premium", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(completion_time[, c(1, 5, 9, 13), 2], na.rm = TRUE),
rowMeans(completion_time[, c(2, 6, 10, 14), 2], na.rm = TRUE),
rowMeans(completion_time[, c(3, 7, 11, 15), 2], na.rm = TRUE),
rowMeans(completion_time[, c(4, 8, 12, 16), 2], na.rm = TRUE))), "completion.time")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Sigma7", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(completion_time[, c(1, 5, 9, 13), 3], na.rm = TRUE),
rowMeans(completion_time[, c(2, 6, 10, 14), 3], na.rm = TRUE),
rowMeans(completion_time[, c(3, 7, 11, 15), 3], na.rm = TRUE),
rowMeans(completion_time[, c(4, 8, 12, 16), 3], na.rm = TRUE))), "completion.time"))
)
TD <- rbind(cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Excalibur", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(travel_distance[, c(1, 5, 9, 13), 1], na.rm = TRUE),
rowMeans(travel_distance[, c(2, 6, 10, 14), 1], na.rm = TRUE),
rowMeans(travel_distance[, c(3, 7, 11, 15), 1], na.rm = TRUE),
rowMeans(travel_distance[, c(4, 8, 12, 16), 1], na.rm = TRUE))), "travel.distance")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Premium", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(travel_distance[, c(1, 5, 9, 13), 2], na.rm = TRUE),
rowMeans(travel_distance[, c(2, 6, 10, 14), 2], na.rm = TRUE),
rowMeans(travel_distance[, c(3, 7, 11, 15), 2], na.rm = TRUE),
rowMeans(travel_distance[, c(4, 8, 12, 16), 2], na.rm = TRUE))), "travel.distance")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Sigma7", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(travel_distance[, c(1, 5, 9, 13), 3], na.rm = TRUE),
rowMeans(travel_distance[, c(2, 6, 10, 14), 3], na.rm = TRUE),
rowMeans(travel_distance[, c(3, 7, 11, 15), 3], na.rm = TRUE),
rowMeans(travel_distance[, c(4, 8, 12, 16), 3], na.rm = TRUE))), "travel.distance"))
)
NC <- rbind(cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Excalibur", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(num_clutches[, c(1, 5, 9, 13), 1], na.rm = TRUE),
rowMeans(num_clutches[, c(2, 6, 10, 14), 1], na.rm = TRUE),
rowMeans(num_clutches[, c(3, 7, 11, 15), 1], na.rm = TRUE),
rowMeans(num_clutches[, c(4, 8, 12, 16), 1], na.rm = TRUE))), "number.clutches")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Premium", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(num_clutches[, c(1, 5, 9, 13), 2], na.rm = TRUE),
rowMeans(num_clutches[, c(2, 6, 10, 14), 2], na.rm = TRUE),
rowMeans(num_clutches[, c(3, 7, 11, 15), 2], na.rm = TRUE),
rowMeans(num_clutches[, c(4, 8, 12, 16), 2], na.rm = TRUE))), "number.clutches")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Sigma7", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Peg"),
setNames(data.frame(c(rowMeans(num_clutches[, c(1, 5, 9, 13), 3], na.rm = TRUE),
rowMeans(num_clutches[, c(2, 6, 10, 14), 3], na.rm = TRUE),
rowMeans(num_clutches[, c(3, 7, 11, 15), 3], na.rm = TRUE),
rowMeans(num_clutches[, c(4, 8, 12, 16), 3], na.rm = TRUE))), "number.clutches"))
)
##############################################################
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData",
sep = ""
)
)
CT_trial <- rbind(cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Excalibur", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Trial"),
setNames(data.frame(c(subj_survey[[2]]$completion.time.brk.1,
subj_survey[[2]]$completion.time.brk.2,
subj_survey[[2]]$completion.time.brk.3,
subj_survey[[2]]$completion.time.brk.4)), "completion.time")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Premium", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Trial"),
setNames(data.frame(c(subj_survey[[3]]$completion.time.brk.1,
subj_survey[[3]]$completion.time.brk.2,
subj_survey[[3]]$completion.time.brk.3,
subj_survey[[3]]$completion.time.brk.4)), "completion.time")),
cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
setNames(data.frame(rep("Sigma7", 120)), "Device"),
setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Trial"),
setNames(data.frame(c(subj_survey[[4]]$completion.time.brk.1,
subj_survey[[4]]$completion.time.brk.2,
subj_survey[[4]]$completion.time.brk.3,
subj_survey[[4]]$completion.time.brk.4)), "completion.time"))
)
pirateplot(
formula = completion.time ~ Trial + Device,
data = CT_trial,
main = "Pirateplots of Completion Time Breakdown over Different Trials",
ylab = "Completion Time (seconds - based on manual recordings)",
theme = 2,
pal = "pony",
inf.f.o = .6,
inf.disp = "bean",
point.o = .3,
# Turn up points
bean.b.o = .4,
# Turn down bean borders
quant = c(.1, .9),
# 10th and 90th quantiles
quant.col = "black" # Black quantile lines
)
pirateplot(
formula = completion.time/60 ~ Peg + Device,
data = CT_task,
main = "Pirateplots of Completion Time Breakdown over Different Subtasks",
ylab = "Completion Time (seconds - based on sensor data)",
theme = 2,
pal = "pony",
inf.f.o = .6,
inf.disp = "bean",
point.o = .3,
# Turn up points
bean.b.o = .4,
# Turn down bean borders
quant = c(.1, .9),
# 10th and 90th quantiles
quant.col = "black" # Black quantile lines
)
pirateplot(
formula = travel.distance ~ Peg + Device,
data = TD,
main = "Pirateplots of Travel Distance Breakdown over Different Subtasks",
ylab = "Travel Distance (cm)",
theme = 2,
pal = "pony",
inf.f.o = .6,
inf.disp = "bean",
point.o = .3,
# Turn up points
bean.b.o = .4,
# Turn down bean borders
quant = c(.1, .9),
# 10th and 90th quantiles
quant.col = "black" # Black quantile lines
)
pirateplot(
formula = number.clutches ~ Peg + Device,
data = NC,
main = "Pirateplots of Number of Clutches Breakdown over Different Subtasks",
ylab = "Number of Clutches",
theme = 2,
pal = "pony",
inf.f.o = .6,
inf.disp = "bean",
point.o = .3,
# Turn up points
bean.b.o = .4,
# Turn down bean borders
quant = c(.1, .9),
# 10th and 90th quantiles
quant.col = "black" # Black quantile lines
)
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
sep = ""
)
)
df_transformed <-
rbind(
cbind(subj_survey[[1]][, 2:7], subj_survey[[2]][, 6:28],
device = 'Excalibur'),
cbind(subj_survey[[1]][, 2:7], subj_survey[[3]][, 6:28],
device = 'Premium'),
cbind(subj_survey[[1]][, 2:7], subj_survey[[4]][, 6:28],
device = 'Sigma7')
)
In this section, MANOVA (comparing the subjective performance results with device as the factor) and MANCOVA (additionally including the demographic variables as covariates) were performed on the survey results. Repeated-measures ANOVA was not used here since comparisons between the response variables (each question or feature) are not of interest; rather, the effect of the observed factor (the different hand-controllers) on the multivariate responses is what matters (Friedrich et al. 2018). The significance codes for differences between devices are provided below each test result. The hand-controllers differed significantly on various subjective survey results. Among them, completion time (completion.time.avg), error count (error.count), workspace (A1), weight (A3), orientation smoothness and comfort (A7 and A8), unexpected force feedback (A11), and wrist and upper-arm fatigue level (A19 and A20) were the most significant responses. Additionally, Year Level Category (years.cat) was a significant covariate in the overall multivariate test, and Video Game Experience (videogame.experience.status) was significant for several individual responses.
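The manova()/summary.aov() pattern used below can be illustrated on R's built-in iris data (an analogy only: Species stands in for the device factor, and Sepal.Width plays the role of a covariate):

```r
# MANOVA: a single multivariate test of one factor across several responses
fit_manova <- manova(cbind(Sepal.Length, Petal.Length) ~ Species, data = iris)
summary(fit_manova)      # overall Pillai's trace test
summary.aov(fit_manova)  # univariate follow-up ANOVA for each response
# MANCOVA-style model: the same factor plus a continuous covariate
fit_mancova <- manova(cbind(Sepal.Length, Petal.Length) ~ Species + Sepal.Width,
                      data = iris)
summary(fit_mancova)
```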
Overall test summary:
# MANOVA test
res.man <- manova(
cbind(
completion.time.avg,
error.count,
A1,
A2,
A3,
A4,
A5,
A6,
A7,
A8,
A9,
A10,
A11,
A12,
A13,
A14,
A15,
A16,
A17,
A18,
A19,
A20,
A21
) ~ device,
data = df_transformed
)
summary(res.man)
## Df Pillai approx F num Df den Df Pr(>F)
## device 2 1.1169 3.1343 46 114 4.378e-07 ***
## Residuals 78
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Summary of the analysis of variance model:
# Look to see which differ
summary.aov(res.man)
## Response completion.time.avg :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 7798 3899.1 4.6933 0.01189 *
## Residuals 78 64801 830.8
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response error.count :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 29.863 14.9315 4.4475 0.01482 *
## Residuals 78 261.865 3.3572
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A1 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 6.7858 3.3929 9.417 0.0002171 ***
## Residuals 78 28.1031 0.3603
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A2 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1.4656 0.73279 1.8887 0.1581
## Residuals 78 30.2628 0.38798
##
## Response A3 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2.6944 1.34721 5.5423 0.005615 **
## Residuals 78 18.9599 0.24308
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A4 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2.66 1.33018 1.9452 0.1498
## Residuals 78 53.34 0.68384
##
## Response A5 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 418.6 209.28 0.8854 0.4167
## Residuals 78 18437.0 236.37
##
## Response A6 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1212.6 606.28 2.141 0.1244
## Residuals 78 22087.4 283.17
##
## Response A7 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 6724 3362.1 8.2697 0.0005531 ***
## Residuals 78 31712 406.6
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A8 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 5201 2600.44 5.9564 0.003914 **
## Residuals 78 34053 436.58
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A9 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2718.9 1359.46 4.1533 0.01932 *
## Residuals 78 25531.1 327.32
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A10 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 5554 2777.03 4.6881 0.01195 *
## Residuals 78 46204 592.36
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A11 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 13829 6914.5 10.836 7.035e-05 ***
## Residuals 78 49771 638.1
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A12 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1705 852.37 1.6836 0.1924
## Residuals 78 39490 506.29
##
## Response A13 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 3353 1676.50 2.6873 0.07436 .
## Residuals 78 48661 623.85
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A14 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 702.6 351.3 1.4748 0.2351
## Residuals 78 18579.5 238.2
##
## Response A15 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 4114 2056.8 3.7431 0.02804 *
## Residuals 78 42861 549.5
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A16 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2903 1451.43 2.0514 0.1354
## Residuals 78 55186 707.51
##
## Response A17 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 5737 2868.58 3.9741 0.02272 *
## Residuals 78 56302 721.82
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A18 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2896 1448.02 2.369 0.1003
## Residuals 78 47676 611.23
##
## Response A19 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 6096 3047.92 5.5168 0.005742 **
## Residuals 78 43093 552.48
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A20 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 14356 7178.0 13.82 7.285e-06 ***
## Residuals 78 40513 519.4
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A21 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 701 350.75 0.719 0.4905
## Residuals 78 38053 487.86
##
## 9 observations deleted due to missingness
Overall test summary:
# MANCOVA test
res.manc <- manova(
cbind(
completion.time.avg,
error.count,
A1,
A2,
A3,
A4,
A5,
A6,
A7,
A8,
A9,
A10,
A11,
A12,
A13,
A14,
A15,
A16,
A17,
A18,
A19,
A20,
A21
) ~ device +
train.status + years.cat + age.range.cat + prior.experience.status +
videogame.experience.status,
data = df_transformed
)
summary(res.manc)
## Df Pillai approx F num Df den Df Pr(>F)
## device 2 1.18577 2.97591 46 94 3.897e-06
## years.cat 1 0.60617 3.07830 23 46 0.0005777
## age.range.cat 1 0.32916 0.98132 23 46 0.5045648
## prior.experience.status 1 0.27956 0.77609 23 46 0.7407980
## videogame.experience.status 1 0.41512 1.41951 23 46 0.1539565
## Residuals 68
##
## device ***
## years.cat ***
## age.range.cat
## prior.experience.status
## videogame.experience.status
## Residuals
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Summary of the analysis of variance model:
# Look to see which differ
summary.aov(res.manc)
## Response completion.time.avg :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 7209 3604.5 4.0898 0.02103 *
## years.cat 1 125 124.7 0.1414 0.70803
## age.range.cat 1 116 115.8 0.1314 0.71812
## prior.experience.status 1 661 661.4 0.7505 0.38937
## videogame.experience.status 1 925 925.4 1.0500 0.30914
## Residuals 68 59931 881.3
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response error.count :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 30.958 15.4790 4.4957 0.01466 *
## years.cat 1 2.803 2.8030 0.8141 0.37010
## age.range.cat 1 2.307 2.3067 0.6699 0.41593
## prior.experience.status 1 11.865 11.8652 3.4461 0.06773 .
## videogame.experience.status 1 2.606 2.6056 0.7568 0.38740
## Residuals 68 234.128 3.4431
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A1 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 6.4745 3.2372 8.7659 0.0004102 ***
## years.cat 1 1.0560 1.0560 2.8594 0.0954215 .
## age.range.cat 1 0.8434 0.8434 2.2837 0.1353732
## prior.experience.status 1 0.0458 0.0458 0.1239 0.7259223
## videogame.experience.status 1 0.0148 0.0148 0.0401 0.8418278
## Residuals 68 25.1123 0.3693
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A2 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1.2379 0.61897 1.4578 0.2399
## years.cat 1 0.1586 0.15857 0.3735 0.5432
## age.range.cat 1 0.0059 0.00589 0.0139 0.9066
## prior.experience.status 1 0.0686 0.06858 0.1615 0.6890
## videogame.experience.status 1 0.0030 0.00301 0.0071 0.9332
## Residuals 68 28.8727 0.42460
##
## Response A3 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2.5421 1.27103 5.0794 0.008791 **
## years.cat 1 0.0196 0.01959 0.0783 0.780500
## age.range.cat 1 0.6881 0.68809 2.7498 0.101871
## prior.experience.status 1 0.0018 0.00182 0.0073 0.932272
## videogame.experience.status 1 0.0528 0.05281 0.2110 0.647425
## Residuals 68 17.0156 0.25023
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A4 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2.417 1.2085 2.0792 0.13290
## years.cat 1 3.177 3.1765 5.4651 0.02235 *
## age.range.cat 1 0.284 0.2842 0.4890 0.48678
## prior.experience.status 1 0.524 0.5239 0.9013 0.34578
## videogame.experience.status 1 2.074 2.0745 3.5691 0.06313 .
## Residuals 68 39.524 0.5812
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A5 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 518.9 259.44 1.2138 0.303430
## years.cat 1 1450.2 1450.23 6.7849 0.011282 *
## age.range.cat 1 110.8 110.82 0.5185 0.473971
## prior.experience.status 1 140.9 140.87 0.6591 0.419725
## videogame.experience.status 1 1649.2 1649.25 7.7160 0.007068 **
## Residuals 68 14534.6 213.74
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A6 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1484.6 742.31 2.6582 0.07735 .
## years.cat 1 582.2 582.20 2.0848 0.15336
## age.range.cat 1 10.1 10.13 0.0363 0.84952
## prior.experience.status 1 381.4 381.40 1.3658 0.24662
## videogame.experience.status 1 694.3 694.31 2.4863 0.11948
## Residuals 68 18989.3 279.25
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A7 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 7773.3 3886.6 9.9294 0.0001647 ***
## years.cat 1 197.0 197.0 0.5033 0.4804732
## age.range.cat 1 1.9 1.9 0.0048 0.9447893
## prior.experience.status 1 14.2 14.2 0.0362 0.8497085
## videogame.experience.status 1 521.1 521.1 1.3313 0.2526113
## Residuals 68 26617.2 391.4
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A8 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 4851.4 2425.72 5.3231 0.007117 **
## years.cat 1 81.1 81.14 0.1781 0.674374
## age.range.cat 1 632.2 632.16 1.3872 0.242979
## prior.experience.status 1 2.7 2.68 0.0059 0.939085
## videogame.experience.status 1 261.6 261.56 0.5740 0.451301
## Residuals 68 30987.7 455.70
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A9 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2044.6 1022.31 3.5533 0.03406 *
## years.cat 1 247.5 247.55 0.8604 0.35691
## age.range.cat 1 2.1 2.15 0.0075 0.93139
## prior.experience.status 1 400.0 400.04 1.3904 0.24244
## videogame.experience.status 1 785.9 785.87 2.7314 0.10300
## Residuals 68 19564.4 287.71
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A10 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 5112 2556.11 4.5210 0.01434 *
## years.cat 1 328 328.34 0.5807 0.44866
## age.range.cat 1 546 545.70 0.9652 0.32937
## prior.experience.status 1 82 81.77 0.1446 0.70491
## videogame.experience.status 1 2952 2952.40 5.2219 0.02543 *
## Residuals 68 38446 565.39
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A11 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 12673 6336.4 10.1694 0.0001368 ***
## years.cat 1 82 81.7 0.1312 0.7183312
## age.range.cat 1 26 26.0 0.0417 0.8387788
## prior.experience.status 1 4 4.2 0.0067 0.9349954
## videogame.experience.status 1 1160 1160.5 1.8625 0.1768408
## Residuals 68 42370 623.1
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A12 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1270.1 635.03 1.4153 0.24991
## years.cat 1 2698.3 2698.31 6.0139 0.01677 *
## age.range.cat 1 1304.3 1304.34 2.9070 0.09276 .
## prior.experience.status 1 521.9 521.87 1.1631 0.28463
## videogame.experience.status 1 273.7 273.72 0.6101 0.43748
## Residuals 68 30510.4 448.68
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A13 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 3905 1952.40 3.1979 0.04706 *
## years.cat 1 390 390.22 0.6392 0.42680
## age.range.cat 1 25 24.54 0.0402 0.84169
## prior.experience.status 1 5 5.06 0.0083 0.92772
## videogame.experience.status 1 2985 2984.67 4.8887 0.03040 *
## Residuals 68 41515 610.52
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A14 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1146.8 573.39 2.6127 0.08069 .
## years.cat 1 4.2 4.23 0.0193 0.89004
## age.range.cat 1 125.9 125.88 0.5736 0.45147
## prior.experience.status 1 179.3 179.31 0.8170 0.36924
## videogame.experience.status 1 1162.1 1162.13 5.2952 0.02446 *
## Residuals 68 14923.7 219.47
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A15 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 3066 1533.1 2.8139 0.06697 .
## years.cat 1 182 181.8 0.3336 0.56544
## age.range.cat 1 52 52.1 0.0956 0.75815
## prior.experience.status 1 82 82.0 0.1505 0.69930
## videogame.experience.status 1 3191 3191.4 5.8576 0.01819 *
## Residuals 68 37049 544.8
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A16 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2665 1332.4 2.1389 0.125643
## years.cat 1 1476 1476.4 2.3699 0.128332
## age.range.cat 1 477 477.4 0.7663 0.384435
## prior.experience.status 1 4 3.6 0.0058 0.939398
## videogame.experience.status 1 5334 5333.6 8.5619 0.004663 **
## Residuals 68 42361 623.0
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A17 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 5043 2521.43 3.4304 0.03807 *
## years.cat 1 634 634.15 0.8628 0.35625
## age.range.cat 1 176 175.89 0.2393 0.62628
## prior.experience.status 1 18 18.24 0.0248 0.87528
## videogame.experience.status 1 665 664.51 0.9041 0.34506
## Residuals 68 49981 735.01
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A18 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2594 1297.21 2.2881 0.10922
## years.cat 1 171 170.86 0.3014 0.58483
## age.range.cat 1 376 376.44 0.6640 0.41800
## prior.experience.status 1 657 656.85 1.1586 0.28556
## videogame.experience.status 1 1691 1691.08 2.9828 0.08869 .
## Residuals 68 38552 566.95
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A19 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 5548 2774.19 5.1957 0.007947 **
## years.cat 1 416 415.91 0.7789 0.380573
## age.range.cat 1 192 192.27 0.3601 0.550451
## prior.experience.status 1 0 0.01 0.0000 0.996504
## videogame.experience.status 1 1677 1677.22 3.1412 0.080817 .
## Residuals 68 36308 533.94
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A20 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 15819 7909.7 16.7654 1.205e-06 ***
## years.cat 1 249 248.9 0.5276 0.4701
## age.range.cat 1 225 224.8 0.4765 0.4924
## prior.experience.status 1 30 30.1 0.0638 0.8014
## videogame.experience.status 1 1262 1261.7 2.6743 0.1066
## Residuals 68 32082 471.8
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response A21 :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 1144.0 572.0 1.4547 0.240635
## years.cat 1 2273.4 2273.4 5.7817 0.018923 *
## age.range.cat 1 3173.0 3173.0 8.0696 0.005934 **
## prior.experience.status 1 271.0 271.0 0.6892 0.409350
## videogame.experience.status 1 150.9 150.9 0.3837 0.537705
## Residuals 68 26737.8 393.2
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## 15 observations deleted due to missingness
In the snippet below, post-hoc Tukey's tests were applied to the variables found significant in the previous section. Within the family-wise confidence level graphs, a confidence interval that does not include zero indicates a significant difference between the corresponding pair of hand-controllers.
# completion.time.avg
print("completion.time.avg:")
## [1] "completion.time.avg:"
tk.lm <- lm(completion.time.avg
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur 17.583333 -0.04115869 35.20783 0.0506736
## Sigma7-Excalibur 22.925000 5.30050798 40.54949 0.0072498
## Sigma7-Premium 5.341667 -12.28282536 22.96616 0.7507495
plot(tukey.test, sub="for Completion Time", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# error.count
print("error.count:")
## [1] "error.count:"
tk.lm <- lm(error.count
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur 1.4666667 0.3729454 2.5603880 0.0054450
## Sigma7-Excalibur 0.6793103 -0.4237993 1.7824200 0.3109912
## Sigma7-Premium -0.7873563 -1.8904660 0.3157533 0.2101975
plot(tukey.test, sub="for Error Count", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A1
print("A1:")
## [1] "A1:"
tk.lm <- lm(A1
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -0.07142857 -0.4549965 0.3121394 0.8969577
## Sigma7-Excalibur -0.60714286 -0.9873898 -0.2268959 0.0007714
## Sigma7-Premium -0.53571429 -0.9159612 -0.1554674 0.0033322
plot(tukey.test, sub="for A1", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A3
print("A3:")
## [1] "A3:"
tk.lm <- lm(A3
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -0.4285714 -0.74425452 -0.1128883 0.0048588
## Sigma7-Excalibur -0.1330049 -0.44595478 0.1799449 0.5698889
## Sigma7-Premium 0.2955665 -0.01738335 0.6085164 0.0682112
plot(tukey.test, sub="for A3", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A7
print("A7:")
## [1] "A7:"
tk.lm <- lm(A7
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -24.66667 -37.067038 -12.266295 0.0000242
## Sigma7-Excalibur -13.66667 -26.067038 -1.266295 0.0271586
## Sigma7-Premium 11.00000 -1.400371 23.400371 0.0926331
plot(tukey.test, sub="for A7", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A8
print("A8:")
## [1] "A8:"
tk.lm <- lm(A8
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -21.5 -34.068178 -8.931822 0.0002920
## Sigma7-Excalibur -15.0 -27.568178 -2.431822 0.0151089
## Sigma7-Premium 6.5 -6.068178 19.068178 0.4369766
plot(tukey.test, sub="for A8", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A9
print("A9:")
## [1] "A9:"
tk.lm <- lm(A9
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -12.500000 -24.05055 -0.9494473 0.0307086
## Sigma7-Excalibur -15.666667 -27.21722 -4.1161140 0.0048630
## Sigma7-Premium -3.166667 -14.71722 8.3838860 0.7907591
plot(tukey.test, sub="for A9", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A10
print("A10:")
## [1] "A10:"
tk.lm <- lm(A10
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -14.833333 -29.45759 -0.2090763 0.0460408
## Sigma7-Excalibur -20.500000 -35.12426 -5.8757430 0.0034779
## Sigma7-Premium -5.666667 -20.29092 8.9575903 0.6266987
plot(tukey.test, sub="for A10", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A11
print("A11:")
## [1] "A11:"
tk.lm <- lm(A11
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -10.00000 -25.59120 5.591199 0.2822846
## Sigma7-Excalibur -31.66667 -47.25787 -16.075468 0.0000164
## Sigma7-Premium -21.66667 -37.25787 -6.075468 0.0038059
plot(tukey.test, sub="for A11", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A15
print("A15:")
## [1] "A15:"
tk.lm <- lm(A15
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -17.063218 -31.547755 -2.578682 0.0167377
## Sigma7-Excalibur -12.166667 -26.527929 2.194595 0.1134022
## Sigma7-Premium 4.896552 -9.587985 19.381089 0.7001895
plot(tukey.test, sub="for A15", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A17
print("A17:")
## [1] "A17:"
tk.lm <- lm(A17
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -21.12069 -37.41757 -4.82381 0.0075059
## Sigma7-Excalibur -14.50000 -30.65818 1.65818 0.0877779
## Sigma7-Premium 6.62069 -9.67619 22.91757 0.5984164
plot(tukey.test, sub="for A17", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A19
print("A19:")
## [1] "A19:"
tk.lm <- lm(A19
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -19.166667 -33.278971 -5.054363 0.0047997
## Sigma7-Excalibur -5.166667 -19.278971 8.945637 0.6587051
## Sigma7-Premium 14.000000 -0.112304 28.112304 0.0523273
plot(tukey.test, sub="for A19", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# A20
print("A20:")
## [1] "A20:"
tk.lm <- lm(A20
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -30.666667 -44.586633 -16.746700 0.0000031
## Sigma7-Excalibur -8.333333 -22.253300 5.586633 0.3313368
## Sigma7-Premium 22.333333 8.413367 36.253300 0.0007104
plot(tukey.test, sub="for A20", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
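Rather than reading each family-wise confidence interval plot by eye, the significant pairs can also be collected programmatically. A minimal sketch, run here on simulated stand-in data so it executes on its own (the helper `tukey_sig` and the `toy` data frame are illustrative, not part of the original analysis):

```r
# Hypothetical helper: keep only the Tukey pairs whose adjusted p-value is
# below alpha (equivalently, whose family-wise CI excludes zero).
tukey_sig <- function(response, data, alpha = 0.05) {
  tk <- TukeyHSD(aov(reformulate("device", response), data = data))$device
  tk[tk[, "p adj"] < alpha, , drop = FALSE]
}

# Simulated stand-in for df_transformed, so the sketch is self-contained.
set.seed(1)
toy <- data.frame(
  device = rep(c("Excalibur", "Premium", "Sigma7"), each = 10),
  y      = c(rnorm(10, 0), rnorm(10, 2), rnorm(10, 2))
)
tukey_sig("y", toy)  # e.g. tukey_sig("completion.time.avg", df_transformed)
```

On the real data this would reproduce, in tabular form, exactly the pairs flagged by the plots above.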
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
sep = ""
)
)
df_transformed <-
rbind(
cbind(subj_survey[[1]][, 2:7], subj_feature[[1]][, 2:7],
device = 'Excalibur'),
cbind(subj_survey[[1]][, 2:7], subj_feature[[2]][, 2:7],
device = 'Premium'),
cbind(subj_survey[[1]][, 2:7], subj_feature[[3]][, 2:7],
device = 'Sigma7')
)
In this section, the statistical tests of MANOVA (comparing the sensor feature results with device as the factor) and MANCOVA (the same comparison with the demographic variables added as covariates) were performed. The significance codes for differences between devices are provided below each test result. The hand-controllers differed significantly on several sensor features; among them, travel distance (travel.distance), number of clutches (number.clutches), and force RMS (rms.force) were the significant features for performance.
Overall test summary:
# MANOVA test
res.man <- manova(
cbind(
travel.distance,
number.clutches,
sum.gimbal.angle,
sum.arm.angle,
rms.force
) ~ device,
data = df_transformed
)
summary(res.man)
## Df Pillai approx F num Df den Df Pr(>F)
## device 1 0.45993 9.1975 5 54 2.239e-06 ***
## Residuals 58
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Summary of the analysis of variance model:
# Look to see which differ
summary.aov(res.man)
## Response travel.distance :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 10607 10607.4 4.0936 0.04766 *
## Residuals 58 150291 2591.2
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response number.clutches :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 27.92 27.9178 4.7391 0.03357 *
## Residuals 58 341.67 5.8909
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response sum.gimbal.angle :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 7.434 7.4342 1.9008 0.1733
## Residuals 58 226.841 3.9110
##
## Response sum.arm.angle :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 0.1019 0.10192 0.2797 0.5989
## Residuals 58 21.1364 0.36442
##
## Response rms.force :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 7.5072 7.5072 19.773 3.998e-05 ***
## Residuals 58 22.0201 0.3797
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## 30 observations deleted due to missingness
Overall test summary:
# MANCOVA test
res.manc <- manova(
cbind(
travel.distance,
number.clutches,
sum.gimbal.angle,
sum.arm.angle,
rms.force
) ~ device +
train.status + years.cat + age.range.cat + prior.experience.status +
videogame.experience.status,
data = df_transformed
)
summary(res.manc)
## Df Pillai approx F num Df den Df Pr(>F)
## device 1 0.53655 10.6511 5 46 7.746e-07
## years.cat 1 0.26393 3.2988 5 46 0.01253
## age.range.cat 1 0.08610 0.8667 5 46 0.51069
## prior.experience.status 1 0.07235 0.7175 5 46 0.61357
## videogame.experience.status 1 0.05774 0.5637 5 46 0.72719
## Residuals 50
##
## device ***
## years.cat *
## age.range.cat
## prior.experience.status
## videogame.experience.status
## Residuals
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Summary of the analysis of variance model:
# Look to see which differ
summary.aov(res.manc)
## Response travel.distance :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 10917 10916.9 4.7173 0.03463 *
## years.cat 1 3241 3241.3 1.4006 0.24222
## age.range.cat 1 14 14.5 0.0063 0.93729
## prior.experience.status 1 967 967.2 0.4180 0.52092
## videogame.experience.status 1 1254 1254.1 0.5419 0.46508
## Residuals 50 115711 2314.2
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response number.clutches :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 28.944 28.9435 5.1373 0.02778 *
## years.cat 1 18.347 18.3474 3.2565 0.07716 .
## age.range.cat 1 0.722 0.7217 0.1281 0.72192
## prior.experience.status 1 0.774 0.7739 0.1374 0.71248
## videogame.experience.status 1 6.224 6.2235 1.1046 0.29830
## Residuals 50 281.701 5.6340
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response sum.gimbal.angle :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 9.241 9.2413 2.4281 0.1255
## years.cat 1 0.969 0.9693 0.2547 0.6160
## age.range.cat 1 2.104 2.1042 0.5529 0.4606
## prior.experience.status 1 0.504 0.5044 0.1325 0.7173
## videogame.experience.status 1 3.456 3.4557 0.9080 0.3452
## Residuals 50 190.295 3.8059
##
## Response sum.arm.angle :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 0.1468 0.14681 0.4267 0.51659
## years.cat 1 1.1950 1.19505 3.4736 0.06823 .
## age.range.cat 1 0.0360 0.03597 0.1046 0.74778
## prior.experience.status 1 0.1027 0.10274 0.2986 0.58717
## videogame.experience.status 1 0.3075 0.30753 0.8939 0.34897
## Residuals 50 17.2016 0.34403
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response rms.force :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 1 8.0102 8.0102 25.3881 6.503e-06 ***
## years.cat 1 2.5317 2.5317 8.0241 0.006637 **
## age.range.cat 1 1.0458 1.0458 3.3146 0.074655 .
## prior.experience.status 1 0.5740 0.5740 1.8192 0.183483
## videogame.experience.status 1 0.3666 0.3666 1.1619 0.286251
## Residuals 50 15.7755 0.3155
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## 34 observations deleted due to missingness
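The Pillai statistics reported in the overall summaries above can also be extracted programmatically rather than read off the printed table. A minimal sketch on simulated stand-in data (the column names `f1`/`f2` are hypothetical):

```r
# Simulated stand-in data with a device factor and two responses.
set.seed(2)
toy <- data.frame(
  device = factor(rep(c("Excalibur", "Premium", "Sigma7"), each = 10)),
  f1 = rnorm(30),
  f2 = rnorm(30)
)
fit   <- manova(cbind(f1, f2) ~ device, data = toy)
stats <- summary(fit, test = "Pillai")$stats  # matrix of test statistics
stats["device", "Pillai"]                     # Pillai's trace for the device factor
```

The same pattern applied to `res.man` or `res.manc` would return the values printed above.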
In the snippet below, post-hoc Tukey's tests were applied to the variables found significant in the previous section. Within the family-wise confidence level graphs, a confidence interval that does not include zero indicates a significant difference between the corresponding pair of hand-controllers.
# travel.distance
print("travel.distance:")
## [1] "travel.distance:"
tk.lm <- lm(travel.distance
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur 26.59247 -20.15842 73.34335 0.3683126
## Sigma7-Excalibur 57.27647 10.52559 104.02736 0.0122268
## Sigma7-Premium 30.68401 -16.06688 77.43489 0.2662590
plot(tukey.test, sub="for Travel Distance", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# number.clutches
print("number.clutches:")
## [1] "number.clutches:"
tk.lm <- lm(number.clutches
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur 1.36425347 -0.2730307 3.001538 0.1215174
## Sigma7-Excalibur 1.33664931 -0.3006349 2.973933 0.1318936
## Sigma7-Premium -0.02760417 -1.6648883 1.609680 0.9991094
plot(tukey.test, sub="for Number of Clutches", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# rms.force
print("rms.force:")
## [1] "rms.force:"
tk.lm <- lm(rms.force
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur 0.7074439 0.3622516 1.0526361 0.0000137
## Sigma7-Excalibur 0.1148601 -0.2303322 0.4600524 0.7080579
## Sigma7-Premium -0.5925838 -0.9377760 -0.2473915 0.0002774
plot(tukey.test, sub="for Force RMS", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# sum.gimbal.angle
print("sum.gimbal.angle:")
## [1] "sum.gimbal.angle:"
tk.lm <- lm(sum.gimbal.angle
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -0.7039968 -2.131046 0.7230527 0.4704285
## Sigma7-Excalibur -1.7934731 -3.220523 -0.3664236 0.0098541
## Sigma7-Premium -1.0894763 -2.516526 0.3375732 0.1689437
plot(tukey.test, sub="for Gimbal Angle Sum", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# sum.arm.angle
print("sum.arm.angle:")
## [1] "sum.arm.angle:"
tk.lm <- lm(sum.arm.angle
~ device,
data = df_transformed)
tk.av <- aov(tk.lm)
tukey.test <- TukeyHSD(tk.av)
tukey.test
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = tk.lm)
##
## $device
## diff lwr upr p adj
## Premium-Excalibur -0.08243028 -0.3944329 0.2295723 0.5989286
plot(tukey.test, sub="for Arm Angle Sum", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/PerformanceUsersAll.RData",
sep = ""
)
)
df_transformed <-
rbind(
cbind(subj_survey[[1]][, 2:7], subj_perf[[1]][, 2:6],
device = 'Excalibur'),
cbind(subj_survey[[1]][, 2:7], subj_perf[[2]][, 2:6],
device = 'Premium'),
cbind(subj_survey[[1]][, 2:7], subj_perf[[3]][, 2:6],
device = 'Sigma7')
)
Overall test summary:
# MANOVA test
res.man <- manova(
cbind(
perf.ct,
perf.td,
perf.nc,
perf.dt,
perf.sf
) ~ device,
data = df_transformed
)
summary(res.man)
## Df Pillai approx F num Df den Df Pr(>F)
## device 2 0.32826 3.2988 10 168 0.0006278 ***
## Residuals 87
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Summary of the analysis of variance model:
# Look to see which differ
summary.aov(res.man)
## Response perf.ct :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 0.0009326 0.00046628 2.9034 0.06017 .
## Residuals 87 0.0139721 0.00016060
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response perf.td :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 0.00003454 1.7271e-05 1.5943 0.2089
## Residuals 87 0.00094245 1.0833e-05
##
## Response perf.nc :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 0.002123 0.00106156 1.4067 0.2505
## Residuals 87 0.065653 0.00075464
##
## Response perf.dt :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 3.4540e-12 1.7271e-12 0.7907 0.4568
## Residuals 87 1.9004e-10 2.1844e-12
##
## Response perf.sf :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2.9580e-09 1.4791e-09 2.5649 0.08274 .
## Residuals 87 5.0171e-08 5.7668e-10
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Overall test summary:
# MANCOVA test
res.manc <- manova(
cbind(
perf.ct,
perf.td,
perf.nc,
perf.dt,
perf.sf
) ~ device +
train.status + years.cat + age.range.cat + prior.experience.status +
videogame.experience.status,
data = df_transformed
)
summary(res.manc)
## Df Pillai approx F num Df den Df Pr(>F)
## device 2 0.34545 3.09001 10 148 0.00134 **
## years.cat 1 0.07375 1.16241 5 73 0.33593
## age.range.cat 1 0.03839 0.58286 5 73 0.71298
## prior.experience.status 1 0.03168 0.47768 5 73 0.79178
## videogame.experience.status 1 0.02117 0.31569 5 73 0.90204
## Residuals 77
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Summary of the analysis of variance model:
# Look to see which differ
summary.aov(res.manc)
## Response perf.ct :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 0.0008901 0.00044505 2.5666 0.08336 .
## years.cat 1 0.0001274 0.00012735 0.7344 0.39411
## age.range.cat 1 0.0000169 0.00001694 0.0977 0.75549
## prior.experience.status 1 0.0001676 0.00016755 0.9663 0.32870
## videogame.experience.status 1 0.0001290 0.00012898 0.7438 0.39113
## Residuals 77 0.0133520 0.00017340
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Response perf.td :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 0.00004680 2.3399e-05 2.1004 0.1294
## years.cat 1 0.00000280 2.8005e-06 0.2514 0.6175
## age.range.cat 1 0.00001917 1.9170e-05 1.7208 0.1935
## prior.experience.status 1 0.00001303 1.3031e-05 1.1697 0.2828
## videogame.experience.status 1 0.00000775 7.7471e-06 0.6954 0.4069
## Residuals 77 0.00085781 1.1140e-05
##
## Response perf.nc :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 0.001918 0.00095882 1.5328 0.2224
## years.cat 1 0.001340 0.00133977 2.1418 0.1474
## age.range.cat 1 0.000028 0.00002755 0.0440 0.8343
## prior.experience.status 1 0.000272 0.00027190 0.4347 0.5117
## videogame.experience.status 1 0.000058 0.00005827 0.0932 0.7610
## Residuals 77 0.048167 0.00062554
##
## Response perf.dt :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 2.8630e-12 1.4314e-12 0.6178 0.5418
## years.cat 1 4.4260e-12 4.4262e-12 1.9102 0.1709
## age.range.cat 1 2.4730e-12 2.4727e-12 1.0672 0.3048
## prior.experience.status 1 3.4300e-13 3.4270e-13 0.1479 0.7016
## videogame.experience.status 1 1.2400e-12 1.2399e-12 0.5351 0.4667
## Residuals 77 1.7841e-10 2.3171e-12
##
## Response perf.sf :
## Df Sum Sq Mean Sq F value Pr(>F)
## device 2 3.4120e-09 1.7059e-09 2.6939 0.07399 .
## years.cat 1 3.1900e-10 3.1851e-10 0.5030 0.48034
## age.range.cat 1 8.9000e-11 8.8780e-11 0.1402 0.70911
## prior.experience.status 1 3.0000e-11 2.9760e-11 0.0470 0.82896
## videogame.experience.status 1 1.2000e-11 1.2250e-11 0.0193 0.88974
## Residuals 77 4.8761e-08 6.3326e-10
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## 6 observations deleted due to missingness
In this section, regression analysis was used to relate the objective performance metrics derived from the sensor data within the hand-controller to the subjective performance measures collected through the user survey. The goal was to provide a guideline for identifying the influential device factors and managing them in order to optimize performance. A PCA identified the most influential factors. Performance measures that demonstrated a positive correlation or a significant difference (\(p <\) 0.05) between the devices were included in the regression model. The measures were selected in a manner that maximized the significance of the regression model and the adjusted \(R^2\) value.
The first graph is the scree plot, which visualizes the eigenvalues and shows the percentage of variance explained by each principal component. In the second figure, positively correlated variables point to the same side of the plot and negatively correlated variables point to opposite sides. The longer arrows (colored green) make the largest contributions to the principal components. In the third figure, each point corresponds to the principal components of a user's responses on a device. The mean and covariance of the responses on each device are plotted as shaded ovals with their respective centers. The segregation between the devices reflects their differences in the users' opinions.
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
sep = ""
)
)
df_transformed <-
rbind(
cbind(subj_survey[[1]][, 2:7], subj_survey[[2]][, 6:28], device = 'Excalibur'),
cbind(subj_survey[[1]][, 2:7], subj_survey[[3]][, 6:28], device = 'Premium'),
cbind(subj_survey[[1]][, 2:7], subj_survey[[4]][, 6:28], device = 'Sigma7')
)
subj_dev <- numeric(length = 90)
devices <- c('E', 'P', 'S')
for (j in 1:3) {
for (i in 1:30) {
subj_dev[30 * (j - 1) + i] <- sprintf("%s %s", devices[j], i)
}
}
row.names(df_transformed) <- subj_dev
res.pca <- prcomp( ~ .,
data = df_transformed[, 9:29],
na.action = na.omit,
scale = TRUE)
# Visualize eigenvalues (scree plot). Show the percentage of variances
# explained by each principal component
fviz_eig(res.pca, barcolor = "lightgoldenrod", barfill = "lightgoldenrod", addlabels = TRUE)
# Graph of variables. Positive correlated variables point to the same side of the plot.
# Negative correlated variables point to opposite sides of the graph.
fviz_pca_var(
res.pca,
col.var = "contrib",
# Color by contributions to the PC
gradient.cols =
c("goldenrod1",
"gold1",
"gold",
"lightgoldenrod",
"aquamarine2",
"aquamarine3",
"aquamarine4"
),
repel = TRUE # Avoid text overlapping
)
# Grouping the individuals by device (matching the rows retained after na.omit)
groups <- as.factor(df_transformed$device[1:82])
fviz_pca_ind(
res.pca,
col.ind = groups,
# color by groups
palette = c("mediumaquamarine", "salmon", "cornflowerblue"),
addEllipses = TRUE,
# Concentration ellipses
ellipse.type = "confidence",
legend.title = "Groups",
repel = TRUE
)
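The quantities behind these plots can also be inspected numerically through factoextra's accessor functions. A sketch on the built-in iris data so it runs stand-alone (the real analysis would pass `res.pca` instead of `pca`):

```r
library(factoextra)

# Stand-in PCA on iris, mirroring the prcomp(..., scale = TRUE) call above.
pca <- prcomp(iris[, 1:4], scale. = TRUE)

get_eig(pca)                    # eigenvalues and % variance per component (scree plot data)
head(get_pca_var(pca)$contrib)  # variable contributions (%) to each PC (arrow coloring data)
```

This is useful when the exact contribution percentages, rather than the arrow colors, need to be reported.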
The regression model that relates the sensor-based performance features to the mean values of the subjective metrics is provided below. Metrics that demonstrated significance in the tests and large positive correlations in the PCA, i.e., A5, A6, A7, A8, A9, A13, A15, A16, A17, and A18, were selected for the regression analysis. The coefficients estimate the trends, while \(R^2\) represents the scatter around the regression line. A \(p =\) 0.002 indicates the significance of the model regardless of the \(R^2\) value. Small \(R^2\) values are problematic when precise predictions are needed; here, however, the goal was to identify the trends that optimize performance with the hand-controller. Small \(R^2\) values are common in models of human performance because individuals are unpredictable. Despite the small \(R^2\), the small \(p\)-value indicates a real relationship between the significant predictors and the response variable.
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
sep = ""
)
)
df_transformed <-
rbind(
cbind(subj_survey_norm[[1]][, 2:7], subj_survey_norm[[2]][, 6:28],
subj_feature_norm[[1]][, 2:6], device = 'Excalibur'),
cbind(subj_survey_norm[[1]][, 2:7], subj_survey_norm[[3]][, 6:28],
subj_feature_norm[[2]][, 2:6], device = 'Premium'),
cbind(subj_survey_norm[[1]][, 2:7], subj_survey_norm[[4]][, 6:28],
subj_feature_norm[[3]][, 2:6], device = 'Sigma7')
)
# A4 is important
mlm1 <- lm(
rowMeans(
cbind(A5,
A6,
A7,
A8,
A9,
A13,
A15,
A16,
A17,
A18)
) ~
travel.distance + number.clutches + sum.gimbal.angle +
rms.force,
data = df_transformed
)
summary(mlm1)
##
## Call:
## lm(formula = rowMeans(cbind(A5, A6, A7, A8, A9, A13, A15, A16,
## A17, A18)) ~ travel.distance + number.clutches + sum.gimbal.angle +
## rms.force, data = df_transformed)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.49212 -0.09433 0.00032 0.11806 0.30620
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.78082 0.06115 12.769 <2e-16 ***
## travel.distance -0.28958 0.16869 -1.717 0.0898 .
## number.clutches -0.20339 0.08217 -2.475 0.0154 *
## sum.gimbal.angle 0.30530 0.20206 1.511 0.1347
## rms.force -0.01933 0.05702 -0.339 0.7355
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.1582 on 82 degrees of freedom
## (3 observations deleted due to missingness)
## Multiple R-squared: 0.1849, Adjusted R-squared: 0.1451
## F-statistic: 4.649 on 4 and 82 DF, p-value: 0.001968
The regression model formula is as follows:
\[ Perf_{subj} = 0.78 - 0.29 \ TD - 0.20 \ NC + 0.30 \ GA_{sum} - 0.02 \ RMSF, \]
where \(Perf_{subj}\) is the average subjective performance measure, \(TD\) is the travel distance (measured from Kuka’s end effector), \(NC\) is the number of clutches, \(GA_{sum}\) is the sum of changes in gimbal angles, and \(RMSF\) is the square root of the sum of the squared forces; all are objective measures derived from the device sensors and calculated across the 16 (sub-)task repetitions.
After deriving the regression model, the equation was optimized to identify the feature values that yield the highest performance with the hand-controller, as follows:
f <- function(x) {
stopifnot(is.numeric(x))
stopifnot(is.finite(x))
stopifnot(length(x) == 4)
x1 <- x[1]
x2 <- x[2]
x3 <- x[3]
x4 <- x[4]
- 1 * (0.78 -
(0.29 * x1) -
(0.20 * x2) +
(0.30 * x3) -
(0.02 * x4))
}
ftoo <-
deriv(
expression(
- 1 * (0.78 -
(0.29 * x1) -
(0.20 * x2) +
(0.30 * x3) -
(0.02 * x4))
),
namevec = c("x1", "x2", "x3", "x4"),
function.arg = TRUE
)
ftootoo <- function(x) {
stopifnot(is.numeric(x))
stopifnot(is.finite(x))
stopifnot(length(x) == 4)
x1 <- x[1]
x2 <- x[2]
x3 <- x[3]
x4 <- x[4]
ftoo(x1, x2, x3, x4)
}
print("Negative of the equation is optimized to get the maximum:")
## [1] "Negative of the equation is optimized to get the maximum:"
optim(c(0.5, 0.5, 0.5, 0.5),
f,
function(x) attr(ftootoo(x), "gradient"),
method = "L-BFGS-B", # box constraints (lower/upper) require "L-BFGS-B"
lower = c(0, 0, 0, 0),
upper = c(1, 1, 1, 1))
## $par
## [1] 0 0 1 0
##
## $value
## [1] -1.08
##
## $counts
## function gradient
## 8 8
##
## $convergence
## [1] 0
##
## $message
## [1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
In order to maximize the equation, its negative was minimized with L-BFGS-B, a limited-memory quasi-Newton (variable metric) method that supports box constraints. The method uses function values and gradients to build up a picture of the surface to be optimized. The results suggest the following values, on a scale of 0-1, for obtaining the maximum performance from the hand-controller:
\(TD = 0\)
\(NC = 0\)
\(GA_{sum} = 1\)
\(RMSF = 0\)
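Because the fitted equation is linear in the (normalized) features, this optimum can be cross-checked without a gradient method: the box-constrained maximum of a linear function sits at a vertex of \([0,1]^4\), with each feature at 0 when its coefficient is negative and at 1 when positive. A brute-force sketch over all 16 vertices confirms the optimizer's result:

```r
# The fitted regression equation from above.
perf <- function(x) 0.78 - 0.29 * x[1] - 0.20 * x[2] + 0.30 * x[3] - 0.02 * x[4]

# All 16 corners of the unit hypercube [0,1]^4.
vertices <- as.matrix(expand.grid(rep(list(0:1), 4)))
best <- vertices[which.max(apply(vertices, 1, perf)), ]

unname(best)  # 0 0 1 0, i.e. TD = 0, NC = 0, GA_sum = 1, RMSF = 0
perf(best)    # 1.08, matching the negated $value from the optimizer
```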
In order to apply a prediction algorithm for identifying the surgeon's skill based on the hand-controller performance features, the first step is to learn the patterns in the data through exploration and visualization. In this section, data visualizations are provided to examine the correlations between pairs of features and the distribution of the data for each class (Novice and Expert surgeons).
# reading data from the saved file
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData",
sep = ""
)
)
load(
paste(
"~/Desktop/Canada/neuroArm/neuroArmPLUS",
"/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
sep = ""
)
)
# find the average of features across 3 devices
subj_feature_norm_ESP <-
(
cbind(
as.numeric(subj_survey_norm[[2]][, 2]),
subj_survey_norm[[2]][, 3],
subj_feature_norm[[1]][, 2:5]
) +
cbind(
as.numeric(subj_survey_norm[[3]][, 2]),
subj_survey_norm[[3]][, 3],
subj_feature_norm[[2]][, 2:5]
) +
cbind(
as.numeric(subj_survey_norm[[4]][, 2]),
subj_survey_norm[[4]][, 3],
subj_feature_norm[[3]][, 2:5]
)
) / 3
experience.level <- subj_survey_norm[[1]][, 3]
experience.level[experience.level == 1 | experience.level == 2] <- "Novice"
experience.level[experience.level == 3 | experience.level == 4] <- "Expert"
experience.level[11] <- "Expert" # assigning the missing value
lr_data <- cbind(subj_feature_norm_ESP, as.character(experience.level))
row_names <- experience.level
col_names <- c("Completion Time", "Error Count", "Travel Distance", "Number of Clutches", "Force RMS", "Gimbal Angle Sum", "Experience Level")
colnames(lr_data) <- col_names
# replacing the missing error count with the mean of its group
lr_data[4, 2] <- mean(lr_data[lr_data[, 7] == "Novice", 2], na.rm = TRUE)
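The prediction step alluded to above can be sketched with a simple logistic regression, shown purely as an illustration. Simulated stand-in features are used so the snippet runs on its own; the real analysis would use the `lr_data` frame built above:

```r
# Simulated stand-in: two features with class-dependent means, two skill classes.
set.seed(3)
toy <- data.frame(
  completion.time = c(rnorm(15, 1.0, 0.2), rnorm(15, 0.6, 0.2)),
  rms.force       = c(rnorm(15, 1.0, 0.2), rnorm(15, 0.6, 0.2)),
  level           = factor(rep(c("Novice", "Expert"), each = 15))
)

# Logistic regression: the second factor level is modeled as "success".
fit  <- glm(level ~ ., data = toy, family = binomial)
prob <- predict(fit, type = "response")
pred <- ifelse(prob > 0.5, levels(toy$level)[2], levels(toy$level)[1])
mean(pred == toy$level)  # in-sample classification accuracy
```

With well-separated classes, the in-sample accuracy is high; a real skill classifier would of course be evaluated on held-out data.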
The figure below shows box plots of the features, indicating the median, maximum, and minimum of the data.
par(mfrow = c(1,6)) # number of subgraphs
par(pin = c(2,5)) # plot areas for graphs
for (i in 1:6) {
boxplot(lr_data[, i],
main = col_names[i],
outline = FALSE,
xlab = "Value",
cex.main = 3,
cex.lab = 2,
cex.axis = 2,
col = "gold"
)
}
The histograms, correlation values, and scatter plots in the figure below illustrate the data distribution in each class and the correlations between features. The distribution of each variable is shown in the diagonal boxes of the grid. The boxes below the diagonal display the bivariate scatter plots; in them, the red circles show Expert and the green circles show Novice surgeons, with linear fits and correlation ellipses overlaid on the scattered data. The boxes above the diagonal provide the Pearson correlations with significance levels shown as stars, where the \(p\)-value cut points (0, 0.001, 0.01, 0.05, 0.1, 1) correspond to the symbols (\("***"\), \("**"\), \("*"\), \("."\), \(" "\)).
# par(mfrow = c(1,1))
# par(pin = c(5,10)) # plot areas for graphs
# col1 <-
# colorRampPalette(
# c("goldenrod1",
# "gold",
# "gold1",
# "lightgoldenrod1",
# "lightgoldenrodyellow",
# "aquamarine",
# "aquamarine2",
# "aquamarine4"
# )
# )
# correlations <- cor(lr_data[, -7])
# res1 <- cor.mtest(correlations, conf.level = .95)
# corrplot(
# correlations,
# title = "The Graphical Display of the Features Correlation Matrix",
# cex.main = 3,
# type = "upper",
# order = "hclust", # reordering the fatures based on the correlation cluster
# p.mat = res1$p,
# sig.level = .2, # crossed the insignificant value according to the significant level
# method = "circle",
# col = col1(100),
# tl.col = "black",
# tl.cex = 2,
# cl.cex = 1.5,
# mar=c(0,0,5,0)
# )
# The distribution of each variable is shown on the diagonal.
# On the bottom of the diagonal :
# the bivariate scatter plots with a fitted line are displayed
# On the top of the diagonal :
# the value of the correlation plus the significance level as stars
# Each significance level is associated to a symbol :
# p-values(0, 0.001, 0.01, 0.05, 0.1, 1) <=> symbols(“***”, “**”, “*”, “.”, " “)
pairs.panels(
lr_data[, -7],
bg = c("firebrick1", "lightgreen")[lr_data$`Experience Level`],
hist.col = "lightgoldenrod1",
cor = TRUE,
method = "pearson",
main = "Histograms, Scatter Plots and Correlations of Features",
density = TRUE,
ellipses = TRUE,
stars = TRUE,
lm = TRUE,
pch = 21,
cex = 2,
cex.main = 3,
cex.cor = 0.5,
cex.axis = 2,
cex.labels = 2.5,
font.labels = 2
)
The distributions of the data were either normal or mixtures with the median shifted towards the lower values, indicating that a few participants performed noticeably differently from the rest. All of the features were positively correlated with one another, except for wrist angular movement and number of clutches. Each clutch marks a moment where the user had to recenter the hand-controller within its optimal workspace, which costs time and adds travelled distance. Users who clutched more spent less time in the optimal workspace, relied more on arm movements and less on wrist movements, and exerted higher forces. This is likely because forces are more difficult to control outside the optimal workspace, i.e. when approaching singularity, or because these users were novices who had not yet refined their surgical technique and motion. Users who took longer to complete the task also travelled longer distances on the board, used more clutches, and required more force to complete the task. These findings are consistent with the evolution of a surgeon over a training period, where the individual tends toward smaller movements with less force within the optimal workspace (Sugiyama et al., 2018; Uemura et al., 2014). Interestingly, the error count did not correlate significantly with any of the other features.
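The pairwise correlations discussed above can be reproduced directly with base R's `cor()` and `cor.test()`. The sketch below is illustrative only: the feature names and values are synthetic stand-ins, not the study data.

```r
# Minimal sketch of the correlation analysis, on synthetic stand-in data
set.seed(1)
feats <- data.frame(
  completion.time = rnorm(30),
  travel.distance = rnorm(30),
  rms.force       = rnorm(30)
)
r_mat <- cor(feats, method = "pearson") # pairwise Pearson correlations
# p-value for a single pair, as summarized by the stars in pairs.panels()
p_ct <- cor.test(feats$completion.time, feats$travel.distance)$p.value
```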
The density plots for the features are provided in the figures below. The plots illustrate the degree of separation between Experts and Novices for each feature. In each plot the two groups overlap, which highlights the difficulty of making a prediction. There were differences in the mean values of the Gimbal Angle Sum, as well as in the spread of the data for Force RMS, Completion Time, and Travel Distance, with a wider distribution (larger dispersion along the range) for the Novice group. Additionally, the Number of Clutches and Error Count follow a similar pattern, with a semi-bimodal distribution for the Novice group.
featurePlot(
x = lr_data[,-7],
y = lr_data[, 7],
plot = "density",
scales = list(
x = list(relation = "free"),
y = list(relation = "free")
),
par.settings = list(
superpose.line = list(col = c("firebrick1", "lightgreen")),
strip.background = list(col = "lightgoldenrod1")
),
adjust = 1.5,
pch = "|",
col = c("firebrick1", "lightgreen"),
main = "Density Plots of the Surgeon Skill Features",
auto.key = list(columns = 3)
)
In this section, a binomial logistic regression model is developed after partitioning the data into training and testing sets (70% training, 30% testing). The model summary below reports the estimate, standard error, \(z\)-score, and \(p\)-value for each coefficient. Although none of the factors reaches significance at the \(0.05\) threshold, the model can still be used: the lack of significance is mostly due to the small available sample size and the individual differences within each skill group.
col_names <- c("completion.time", "error.count", "travel.distance", "number.clutches", "rms.force", "sum.gimbal.angle", "experience.level")
colnames(lr_data) <- col_names
## 70% of the sample size
smp_size <- floor(0.7 * nrow(lr_data))
## set the seed to make your partition reproducible
set.seed(123)
train_ind <- sample(seq_len(nrow(lr_data)), size = smp_size)
train <- lr_data[train_ind, ]
test <- lr_data[-train_ind, ]
glm.fit.1 <-
glm(
experience.level ~
number.clutches +
rms.force +
sum.gimbal.angle +
completion.time +
error.count +
travel.distance,
data = train,
family = binomial
)
glm.fit.2 <-
glm(
experience.level ~
rms.force +
sum.gimbal.angle +
completion.time +
travel.distance,
data = train,
family = binomial
)
summary(glm.fit.2)
##
## Call:
## glm(formula = experience.level ~ rms.force + sum.gimbal.angle +
## completion.time + travel.distance, family = binomial, data = train)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.0969 -1.0458 0.5701 0.7144 1.3985
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 3.415 3.203 1.066 0.286
## rms.force 6.300 4.087 1.541 0.123
## sum.gimbal.angle -1.945 2.433 -0.799 0.424
## completion.time -3.962 3.250 -1.219 0.223
## travel.distance -1.924 2.935 -0.655 0.512
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 26.734 on 20 degrees of freedom
## Residual deviance: 22.229 on 16 degrees of freedom
## AIC: 32.229
##
## Number of Fisher Scoring iterations: 4
Reducing the number of predictors can improve the prediction power of the logistic regression model. The merit of the reduced model was evaluated with the likelihood ratio test, which compares the likelihood of the data under the full model against the reduced model. Considering the amount of overlap in the feature density plots, and after several experiments, Error Count and Number of Clutches were removed from the logistic regression model, bringing the remaining \(p\)-values closer to significance, e.g. for rms.force. A likelihood ratio test with \(p\)-value > 0.05 indicates that removing the predictor variables does not significantly worsen the model fit, i.e. the difference between the two models is not statistically significant. The test result below shows a \(p\)-value > 0.05, so it was safe to remove number.clutches and error.count from the full model.
lr.test <- lrtest(glm.fit.1, glm.fit.2) # Likelihood ratio test
data.matrix(lr.test)
## #Df LogLik Df Chisq Pr(>Chisq)
## 1 7 -9.904265 NA NA NA
## 2 5 -11.114505 -2 2.420481 0.2981256
To assess how much of the variance in the dependent variable the logistic regression model explains through its predictors (analogous to \(R^{2}\) in ordinary least squares regression), a pseudo-\(R^{2}\) statistic was calculated: McFadden's \(R^{2}\), defined as \(1-\frac{\ln(L_M)}{\ln(L_0)}\), where \(\ln(L_M)\) and \(\ln(L_0)\) are the log-likelihoods of the fitted and null (intercept-only) models, respectively. McFadden's \(R^{2}\) ranges between 0 and 1, with values close to 0 corresponding to models with no predictive power. In our case, this value is approximately 0.17, which is acceptable considering the nature of the problem and the available number of data samples.
pR2(glm.fit.2) # Pseudo R^2 - look for McFadden's R2 (0-1) - values close to 0 have no prediction power
## llh llhNull G2 McFadden r2ML r2CU
## -11.1145051 -13.3667975 4.5045848 0.1684990 0.1930584 0.2681302
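The McFadden value reported by `pR2()` above can also be obtained directly from the log-likelihoods of the fitted and null models. The sketch below does so on a small synthetic data set (the variables are illustrative, not the study data).

```r
# McFadden's R^2 = 1 - ln(L_M) / ln(L_0), computed by hand on toy data
set.seed(123)
d <- data.frame(y = rbinom(30, 1, 0.5), x = rnorm(30))
fit  <- glm(y ~ x, data = d, family = binomial) # fitted model
null <- glm(y ~ 1, data = d, family = binomial) # intercept-only model
mcfadden <- 1 - as.numeric(logLik(fit)) / as.numeric(logLik(null))
```

Because the null model is nested in the fitted model, the fitted log-likelihood is never smaller, so this quantity is guaranteed to be non-negative.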
The regression model formula is as follows:
\[ Skill_{subj} = 3.91 + 7.77 \ RMSF - 1.97 \ GA_{sum} - 4.81 \ CT - 2.85 \ TD, \]
where \(Skill_{subj}\) is the weight indicator for the skill group, \(RMSF\) is the square root of the sum of the squared forces, \(GA_{sum}\) is the sum of changes in gimbal angles, \(CT\) is completion time, and \(TD\) is travel distance (measured from the KUKA end effector).
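For a concrete reading of this equation, the linear predictor can be evaluated for a hypothetical normalized feature vector and mapped to a class probability with the inverse logit, `plogis()`; which class that probability refers to depends on how the Expert/Novice factor is coded. The feature values below are made up for illustration.

```r
# Evaluating the reported regression equation for hypothetical inputs
rmsf <- 0.5; ga_sum <- 0.4; ct <- 0.3; td <- 0.2 # illustrative values
eta  <- 3.91 + 7.77 * rmsf - 1.97 * ga_sum - 4.81 * ct - 2.85 * td
prob <- plogis(eta) # inverse logit maps the score to a probability
```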
The skill prediction for the testing group of surgeons (9 surgeons), based on their performance features, is provided below. The prediction is made with the regression model previously trained on the training set of surgeons (21 surgeons). With 5-fold cross-validation, the skill prediction from the hand-controller performance features achieved 78% accuracy, 33% sensitivity, and 100% specificity (two misclassifications among the 9 cases).
# 5-fold cross validation
ctrl <-
trainControl(method = "repeatedcv",
number = 5,
savePredictions = TRUE)
mod_fit <- train(
experience.level ~
rms.force +
sum.gimbal.angle +
completion.time +
travel.distance,
data = lr_data,
method = "glm",
family = "binomial",
trControl = ctrl,
tuneLength = 5
)
glm.pred = predict(mod_fit, newdata = test)
ptable <- confusionMatrix(data = glm.pred, test$experience.level)
ptable
## Confusion Matrix and Statistics
##
## Reference
## Prediction Expert Novice
## Expert 1 0
## Novice 2 6
##
## Accuracy : 0.7778
## 95% CI : (0.3999, 0.9719)
## No Information Rate : 0.6667
## P-Value [Acc > NIR] : 0.3772
##
## Kappa : 0.4
##
## Mcnemar's Test P-Value : 0.4795
##
## Sensitivity : 0.3333
## Specificity : 1.0000
## Pos Pred Value : 1.0000
## Neg Pred Value : 0.7500
## Prevalence : 0.3333
## Detection Rate : 0.1111
## Detection Prevalence : 0.1111
## Balanced Accuracy : 0.6667
##
## 'Positive' Class : Expert
##
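As a sanity check, the reported accuracy, sensitivity, and specificity can be recomputed by hand from the confusion-matrix counts above, with Expert as the positive class:

```r
# Recomputing the metrics from the confusion-matrix counts
cm <- matrix(c(1, 2, 0, 6), nrow = 2,
             dimnames = list(Prediction = c("Expert", "Novice"),
                             Reference  = c("Expert", "Novice")))
accuracy    <- sum(diag(cm)) / sum(cm)                      # 7/9, ~0.778
sensitivity <- cm["Expert", "Expert"] / sum(cm[, "Expert"]) # 1/3
specificity <- cm["Novice", "Novice"] / sum(cm[, "Novice"]) # 6/6 = 1
```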
fourfoldplot(ptable$table,
main = "Confusion Matrix")
# # AUC calculation
# glm.pred.auc = predict(glm.fit.2, newdata = test, type = "response")
# pred <- prediction(glm.pred.auc, test$experience.level)
# perf <- performance(pred, measure = "tpr", x.measure = "fpr")
# plot(perf)
# auc <- performance(pred, measure = "auc")
# auc <- auc@y.values[[1]]
# print(paste0("AUC = ", auc))
These results indicate that a desirable device should minimize Travel Distance, Number of Clutches, and Force Magnitude, while a larger Gimbal Angle Sum leads to higher performance. In addition, the larger coefficients associated with \(TD\), \(NC\), and \(GA_{sum}\) in the regression equation, i.e. -0.29, -0.20, and 0.30, respectively, show the greater contribution of travel distance, number of clutches, and wrist angular maneuvers to the performance of a hand-controller. Finally, an algorithm for surgeon skill prediction is provided that distinguishes Novice from Expert surgeons based on their performance in using the hand-controllers with 78% accuracy.
Project neuroArm, Department of Clinical Neurosciences, University of Calgary. ↩
Project neuroArm, Department of Clinical Neurosciences, University of Calgary. Corresponding author. garnette@ucalgary.ca.↩